00:00:00.000 Started by upstream project "autotest-per-patch" build number 120552
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 21506
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.104 The recommended git tool is: git
00:00:00.105 using credential 00000000-0000-0000-0000-000000000002
00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.150 Fetching changes from the remote Git repository
00:00:00.157 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.189 Using shallow fetch with depth 1
00:00:00.189 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.189 > git --version # timeout=10
00:00:00.215 > git --version # 'git version 2.39.2'
00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.216 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.216 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/39/22839/3 # timeout=5
00:00:06.901 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.913 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.927 Checking out Revision 905a9901102cedbe148ad73cb882ca0b70f8f559 (FETCH_HEAD)
00:00:06.927 > git config core.sparsecheckout # timeout=10
00:00:06.938 > git read-tree -mu HEAD # timeout=10
00:00:06.955 > git checkout -f 905a9901102cedbe148ad73cb882ca0b70f8f559 # timeout=5
00:00:06.976 Commit message: "jobs/autotest-upstream: Enable ASan, UBSan on all jobs"
00:00:06.976 > git rev-list --no-walk 34845be7ae448993c10fd8929d8277dc075ec12e # timeout=10
00:00:07.097 [Pipeline] Start of Pipeline
00:00:07.112 [Pipeline] library
00:00:07.114 Loading library shm_lib@master
00:00:07.114 Library shm_lib@master is cached. Copying from home.
00:00:07.135 [Pipeline] node
00:00:22.137 Still waiting to schedule task
00:00:22.137 Waiting for next available executor on ‘vagrant-vm-host’
00:09:56.252 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:09:56.254 [Pipeline] {
00:09:56.267 [Pipeline] catchError
00:09:56.268 [Pipeline] {
00:09:56.283 [Pipeline] wrap
00:09:56.290 [Pipeline] {
00:09:56.299 [Pipeline] stage
00:09:56.300 [Pipeline] { (Prologue)
00:09:56.321 [Pipeline] echo
00:09:56.323 Node: VM-host-SM0
00:09:56.328 [Pipeline] cleanWs
00:09:56.336 [WS-CLEANUP] Deleting project workspace...
00:09:56.336 [WS-CLEANUP] Deferred wipeout is used...
00:09:56.341 [WS-CLEANUP] done
00:09:56.560 [Pipeline] setCustomBuildProperty
00:09:56.635 [Pipeline] nodesByLabel
00:09:56.637 Found a total of 1 nodes with the 'sorcerer' label
00:09:56.647 [Pipeline] httpRequest
00:09:56.651 HttpMethod: GET
00:09:56.652 URL: http://10.211.164.101/packages/jbp_905a9901102cedbe148ad73cb882ca0b70f8f559.tar.gz
00:09:56.658 Sending request to url: http://10.211.164.101/packages/jbp_905a9901102cedbe148ad73cb882ca0b70f8f559.tar.gz
00:09:56.662 Response Code: HTTP/1.1 200 OK
00:09:56.662 Success: Status code 200 is in the accepted range: 200,404
00:09:56.663 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_905a9901102cedbe148ad73cb882ca0b70f8f559.tar.gz
00:09:56.801 [Pipeline] sh
00:09:57.083 + tar --no-same-owner -xf jbp_905a9901102cedbe148ad73cb882ca0b70f8f559.tar.gz
00:09:57.099 [Pipeline] httpRequest
00:09:57.103 HttpMethod: GET
00:09:57.104 URL: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz
00:09:57.104 Sending request to url: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz
00:09:57.107 Response Code: HTTP/1.1 200 OK
00:09:57.108 Success: Status code 200 is in the accepted range: 200,404
00:09:57.108 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz
00:09:59.267 [Pipeline] sh
00:09:59.544 + tar --no-same-owner -xf spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz
00:10:02.837 [Pipeline] sh
00:10:03.115 + git -C spdk log --oneline -n5
00:10:03.115 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code
00:10:03.115 5d5e4d333 nvmf/rpc: Fail listener add with different secure channel
00:10:03.115 54944c1d1 event: don't NOTICELOG when no RPC server started
00:10:03.115 460a2e391 lib/init: do not fail if missing RPC's subsystem in JSON config doesn't exist in app
00:10:03.115 5dc808124 init: add spdk_subsystem_exists()
00:10:03.134 [Pipeline] writeFile
00:10:03.149 [Pipeline] sh
00:10:03.427 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:10:03.439 [Pipeline] sh
00:10:03.718 + cat autorun-spdk.conf
00:10:03.718 SPDK_RUN_FUNCTIONAL_TEST=1
00:10:03.718 SPDK_TEST_NVMF=1
00:10:03.718 SPDK_TEST_NVMF_TRANSPORT=tcp
00:10:03.718 SPDK_TEST_USDT=1
00:10:03.718 SPDK_TEST_NVMF_MDNS=1
00:10:03.718 SPDK_RUN_ASAN=1
00:10:03.718 SPDK_RUN_UBSAN=1
00:10:03.718 NET_TYPE=virt
00:10:03.718 SPDK_JSONRPC_GO_CLIENT=1
00:10:03.718 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:10:03.724 RUN_NIGHTLY=0
00:10:03.726 [Pipeline] }
00:10:03.742 [Pipeline] // stage
00:10:03.756 [Pipeline] stage
00:10:03.758 [Pipeline] { (Run VM)
00:10:03.771 [Pipeline] sh
00:10:04.050 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:10:04.050 + echo 'Start stage prepare_nvme.sh'
00:10:04.050 Start stage prepare_nvme.sh
00:10:04.050 + [[ -n 3 ]]
00:10:04.050 + disk_prefix=ex3
00:10:04.050 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]]
00:10:04.050 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]]
00:10:04.050 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf
00:10:04.050 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:10:04.050 ++ SPDK_TEST_NVMF=1
00:10:04.050 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:10:04.050 ++ SPDK_TEST_USDT=1
00:10:04.050 ++ SPDK_TEST_NVMF_MDNS=1
00:10:04.050 ++ SPDK_RUN_ASAN=1
00:10:04.050 ++ SPDK_RUN_UBSAN=1
00:10:04.050 ++ NET_TYPE=virt
00:10:04.050 ++ SPDK_JSONRPC_GO_CLIENT=1
00:10:04.050 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:10:04.050
++ RUN_NIGHTLY=0
00:10:04.050 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:10:04.050 + nvme_files=()
00:10:04.050 + declare -A nvme_files
00:10:04.050 + backend_dir=/var/lib/libvirt/images/backends
00:10:04.050 + nvme_files['nvme.img']=5G
00:10:04.050 + nvme_files['nvme-cmb.img']=5G
00:10:04.050 + nvme_files['nvme-multi0.img']=4G
00:10:04.050 + nvme_files['nvme-multi1.img']=4G
00:10:04.050 + nvme_files['nvme-multi2.img']=4G
00:10:04.050 + nvme_files['nvme-openstack.img']=8G
00:10:04.050 + nvme_files['nvme-zns.img']=5G
00:10:04.050 + (( SPDK_TEST_NVME_PMR == 1 ))
00:10:04.050 + (( SPDK_TEST_FTL == 1 ))
00:10:04.050 + (( SPDK_TEST_NVME_FDP == 1 ))
00:10:04.050 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:10:04.050 + for nvme in "${!nvme_files[@]}"
00:10:04.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:10:04.050 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:10:04.050 + for nvme in "${!nvme_files[@]}"
00:10:04.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:10:04.050 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:10:04.050 + for nvme in "${!nvme_files[@]}"
00:10:04.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:10:04.050 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:10:04.050 + for nvme in "${!nvme_files[@]}"
00:10:04.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:10:04.050 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:10:04.050 + for nvme in "${!nvme_files[@]}"
00:10:04.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:10:04.050 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:10:04.050 + for nvme in "${!nvme_files[@]}"
00:10:04.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:10:04.050 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:10:04.050 + for nvme in "${!nvme_files[@]}"
00:10:04.050 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:10:04.310 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:10:04.310 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:10:04.310 + echo 'End stage prepare_nvme.sh'
00:10:04.310 End stage prepare_nvme.sh
00:10:04.321 [Pipeline] sh
00:10:04.601 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:10:04.601 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38
00:10:04.601
00:10:04.601
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant
00:10:04.601 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk
00:10:04.601 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:10:04.601 HELP=0
00:10:04.601 DRY_RUN=0
00:10:04.601 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,
00:10:04.601 NVME_DISKS_TYPE=nvme,nvme,
00:10:04.601 NVME_AUTO_CREATE=0
00:10:04.601 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,
00:10:04.601 NVME_CMB=,,
00:10:04.601 NVME_PMR=,,
00:10:04.601 NVME_ZNS=,,
00:10:04.601 NVME_MS=,,
00:10:04.601 NVME_FDP=,,
00:10:04.601 SPDK_VAGRANT_DISTRO=fedora38
00:10:04.601 SPDK_VAGRANT_VMCPU=10
00:10:04.601 SPDK_VAGRANT_VMRAM=12288
00:10:04.601 SPDK_VAGRANT_PROVIDER=libvirt
00:10:04.601 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:10:04.602 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:10:04.602 SPDK_OPENSTACK_NETWORK=0
00:10:04.602 VAGRANT_PACKAGE_BOX=0
00:10:04.602 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:10:04.602 FORCE_DISTRO=true
00:10:04.602 VAGRANT_BOX_VERSION=
00:10:04.602 EXTRA_VAGRANTFILES=
00:10:04.602 NIC_MODEL=e1000
00:10:04.602
00:10:04.602 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt'
00:10:04.602 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:10:07.882 Bringing machine 'default' up with 'libvirt' provider...
00:10:08.815 ==> default: Creating image (snapshot of base box volume).
00:10:08.815 ==> default: Creating domain with the following settings...
00:10:08.815 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713438076_129beb3d60796e2f1ba4
00:10:08.815 ==> default: -- Domain type: kvm
00:10:08.815 ==> default: -- Cpus: 10
00:10:08.815 ==> default: -- Feature: acpi
00:10:08.815 ==> default: -- Feature: apic
00:10:08.815 ==> default: -- Feature: pae
00:10:08.815 ==> default: -- Memory: 12288M
00:10:08.815 ==> default: -- Memory Backing: hugepages:
00:10:08.815 ==> default: -- Management MAC:
00:10:08.815 ==> default: -- Loader:
00:10:08.815 ==> default: -- Nvram:
00:10:08.815 ==> default: -- Base box: spdk/fedora38
00:10:08.815 ==> default: -- Storage pool: default
00:10:08.815 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713438076_129beb3d60796e2f1ba4.img (20G)
00:10:08.815 ==> default: -- Volume Cache: default
00:10:08.815 ==> default: -- Kernel:
00:10:08.815 ==> default: -- Initrd:
00:10:08.815 ==> default: -- Graphics Type: vnc
00:10:08.815 ==> default: -- Graphics Port: -1
00:10:08.815 ==> default: -- Graphics IP: 127.0.0.1
00:10:08.815 ==> default: -- Graphics Password: Not defined
00:10:08.815 ==> default: -- Video Type: cirrus
00:10:08.815 ==> default: -- Video VRAM: 9216
00:10:08.815 ==> default: -- Sound Type:
00:10:08.815 ==> default: -- Keymap: en-us
00:10:09.074 ==> default: -- TPM Path:
00:10:09.074 ==> default: -- INPUT: type=mouse, bus=ps2
00:10:09.074 ==> default: -- Command line args:
00:10:09.074 ==> default: -> value=-device,
00:10:09.074 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:10:09.074 ==> default: -> value=-drive,
00:10:09.074 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0,
00:10:09.074 ==> default: -> value=-device,
00:10:09.074 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:10:09.074 ==> default: -> value=-device,
00:10:09.074 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:10:09.074 ==> default: -> value=-drive,
00:10:09.074 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:10:09.074 ==> default: -> value=-device,
00:10:09.074 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:10:09.074 ==> default: -> value=-drive,
00:10:09.074 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:10:09.074 ==> default: -> value=-device,
00:10:09.074 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:10:09.074 ==> default: -> value=-drive,
00:10:09.074 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:10:09.074 ==> default: -> value=-device,
00:10:09.074 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:10:09.074 ==> default: Creating shared folders metadata...
00:10:09.074 ==> default: Starting domain.
00:10:10.974 ==> default: Waiting for domain to get an IP address...
00:10:29.074 ==> default: Waiting for SSH to become available...
00:10:30.009 ==> default: Configuring and enabling network interfaces...
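For reference, the NVMe wiring listed under "Command line args" above can be reproduced outside of Vagrant/libvirt. The sketch below is illustrative only and is not the job's own tooling: the qemu-img call is an assumption inferred from the "Formatting ... fmt=raw ... preallocation=falloc" messages printed by create_nvme_img.sh in the prepare_nvme.sh stage (that helper script is not reproduced in this log), and the machine type, memory size, -nographic, and the omission of boot media are arbitrary choices. Only the -drive/-device values mirror the log.

    # Create a raw, fallocate-preallocated 5G backing file (matching the
    # "Formatting ... preallocation=falloc" output above), then attach it
    # as one emulated NVMe controller with a single 4 KiB-block namespace.
    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex3-nvme.img 5G
    qemu-system-x86_64 \
        -machine q35,accel=kvm -m 2048 -nographic \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096

The second controller in the log (id=nvme-1, serial=12341, addr=0x11) follows the same pattern with three backing files and nvme-ns devices at nsid=1, 2 and 3.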
00:10:34.193 default: SSH address: 192.168.121.128:22
00:10:34.193 default: SSH username: vagrant
00:10:34.193 default: SSH auth method: private key
00:10:36.092 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:10:44.202 ==> default: Mounting SSHFS shared folder...
00:10:45.137 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:10:45.137 ==> default: Checking Mount..
00:10:46.509 ==> default: Folder Successfully Mounted!
00:10:46.509 ==> default: Running provisioner: file...
00:10:47.075 default: ~/.gitconfig => .gitconfig
00:10:47.641
00:10:47.641 SUCCESS!
00:10:47.641
00:10:47.641 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use.
00:10:47.641 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:10:47.641 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm.
00:10:47.641
00:10:47.651 [Pipeline] }
00:10:47.669 [Pipeline] // stage
00:10:47.678 [Pipeline] dir
00:10:47.679 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt
00:10:47.681 [Pipeline] {
00:10:47.695 [Pipeline] catchError
00:10:47.697 [Pipeline] {
00:10:47.711 [Pipeline] sh
00:10:47.989 + vagrant ssh-config --host vagrant
00:10:47.989 + sed -ne /^Host/,$p
00:10:47.989 + tee ssh_conf
00:10:52.176 Host vagrant
00:10:52.176 HostName 192.168.121.128
00:10:52.176 User vagrant
00:10:52.176 Port 22
00:10:52.176 UserKnownHostsFile /dev/null
00:10:52.176 StrictHostKeyChecking no
00:10:52.176 PasswordAuthentication no
00:10:52.176 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38
00:10:52.176 IdentitiesOnly yes
00:10:52.176 LogLevel FATAL
00:10:52.176 ForwardAgent yes
00:10:52.176 ForwardX11 yes
00:10:52.176
00:10:52.201 [Pipeline] withEnv
00:10:52.203 [Pipeline] {
00:10:52.212 [Pipeline] sh
00:10:52.522 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:10:52.522 source /etc/os-release
00:10:52.522 [[ -e /image.version ]] && img=$(< /image.version)
00:10:52.522 # Minimal, systemd-like check.
00:10:52.522 if [[ -e /.dockerenv ]]; then
00:10:52.522 # Clear garbage from the node's name:
00:10:52.522 # agt-er_autotest_547-896 -> autotest_547-896
00:10:52.522 # $HOSTNAME is the actual container id
00:10:52.522 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:10:52.522 if mountpoint -q /etc/hostname; then
00:10:52.522 # We can assume this is a mount from a host where container is running,
00:10:52.522 # so fetch its hostname to easily identify the target swarm worker.
00:10:52.522 container="$(< /etc/hostname) ($agent)"
00:10:52.522 else
00:10:52.522 # Fallback
00:10:52.522 container=$agent
00:10:52.522 fi
00:10:52.522 fi
00:10:52.522 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:10:52.522
00:10:52.790 [Pipeline] }
00:10:52.811 [Pipeline] // withEnv
00:10:52.819 [Pipeline] setCustomBuildProperty
00:10:52.833 [Pipeline] stage
00:10:52.835 [Pipeline] { (Tests)
00:10:52.855 [Pipeline] sh
00:10:53.135 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:10:53.406 [Pipeline] timeout
00:10:53.407 Timeout set to expire in 40 min
00:10:53.409 [Pipeline] {
00:10:53.424 [Pipeline] sh
00:10:53.707 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:10:54.273 HEAD is now at 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code
00:10:54.284 [Pipeline] sh
00:10:54.561 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:10:54.832 [Pipeline] sh
00:10:55.110 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:10:55.380 [Pipeline] sh
00:10:55.656 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo
00:10:55.915 ++ readlink -f spdk_repo
00:10:55.915 + DIR_ROOT=/home/vagrant/spdk_repo
00:10:55.915 + [[ -n /home/vagrant/spdk_repo ]]
00:10:55.915 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:10:55.915 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:10:55.915 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:10:55.915 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:10:55.915 + [[ -d /home/vagrant/spdk_repo/output ]]
00:10:55.915 + cd /home/vagrant/spdk_repo
00:10:55.915 + source /etc/os-release
00:10:55.915 ++ NAME='Fedora Linux'
00:10:55.915 ++ VERSION='38 (Cloud Edition)'
00:10:55.915 ++ ID=fedora
00:10:55.915 ++ VERSION_ID=38
00:10:55.915 ++ VERSION_CODENAME=
00:10:55.915 ++ PLATFORM_ID=platform:f38
00:10:55.915 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:10:55.915 ++ ANSI_COLOR='0;38;2;60;110;180'
00:10:55.915 ++ LOGO=fedora-logo-icon
00:10:55.915 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:10:55.915 ++ HOME_URL=https://fedoraproject.org/
00:10:55.915 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:10:55.915 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:10:55.915 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:10:55.915 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:10:55.915 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:10:55.915 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:10:55.915 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:10:55.915 ++ SUPPORT_END=2024-05-14
00:10:55.915 ++ VARIANT='Cloud Edition'
00:10:55.915 ++ VARIANT_ID=cloud
00:10:55.915 + uname -a
00:10:55.915 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:10:55.915 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:10:56.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:56.482 Hugepages
00:10:56.482 node hugesize free / total
00:10:56.482 node0 1048576kB 0 / 0
00:10:56.482 node0 2048kB 0 / 0
00:10:56.482
00:10:56.482 Type BDF Vendor Device NUMA Driver Device Block devices
00:10:56.482 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:10:56.482 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:10:56.482 NVMe 0000:00:11.0 1b36
0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:10:56.482 + rm -f /tmp/spdk-ld-path 00:10:56.482 + source autorun-spdk.conf 00:10:56.482 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:10:56.482 ++ SPDK_TEST_NVMF=1 00:10:56.482 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:56.482 ++ SPDK_TEST_USDT=1 00:10:56.482 ++ SPDK_TEST_NVMF_MDNS=1 00:10:56.482 ++ SPDK_RUN_ASAN=1 00:10:56.482 ++ SPDK_RUN_UBSAN=1 00:10:56.482 ++ NET_TYPE=virt 00:10:56.482 ++ SPDK_JSONRPC_GO_CLIENT=1 00:10:56.482 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:56.482 ++ RUN_NIGHTLY=0 00:10:56.482 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:10:56.482 + [[ -n '' ]] 00:10:56.482 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:10:56.482 + for M in /var/spdk/build-*-manifest.txt 00:10:56.482 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:10:56.482 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:56.482 + for M in /var/spdk/build-*-manifest.txt 00:10:56.482 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:10:56.482 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:56.482 ++ uname 00:10:56.482 + [[ Linux == \L\i\n\u\x ]] 00:10:56.482 + sudo dmesg -T 00:10:56.482 + sudo dmesg --clear 00:10:56.482 + dmesg_pid=5147 00:10:56.482 + sudo dmesg -Tw 00:10:56.482 + [[ Fedora Linux == FreeBSD ]] 00:10:56.482 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:56.482 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:56.482 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:10:56.482 + [[ -x /usr/src/fio-static/fio ]] 00:10:56.482 + export FIO_BIN=/usr/src/fio-static/fio 00:10:56.482 + FIO_BIN=/usr/src/fio-static/fio 00:10:56.482 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:10:56.482 + [[ ! -v VFIO_QEMU_BIN ]] 00:10:56.482 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:10:56.482 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:56.482 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:56.482 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:10:56.482 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:56.482 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:56.482 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:10:56.482 Test configuration: 00:10:56.482 SPDK_RUN_FUNCTIONAL_TEST=1 00:10:56.482 SPDK_TEST_NVMF=1 00:10:56.482 SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:56.482 SPDK_TEST_USDT=1 00:10:56.482 SPDK_TEST_NVMF_MDNS=1 00:10:56.482 SPDK_RUN_ASAN=1 00:10:56.482 SPDK_RUN_UBSAN=1 00:10:56.482 NET_TYPE=virt 00:10:56.482 SPDK_JSONRPC_GO_CLIENT=1 00:10:56.482 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:56.741 RUN_NIGHTLY=0 11:02:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.741 11:02:04 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:10:56.741 11:02:04 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.741 11:02:04 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.741 11:02:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.741 11:02:04 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.741 11:02:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.741 11:02:04 -- paths/export.sh@5 -- $ export PATH 00:10:56.741 11:02:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.741 11:02:04 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:10:56.741 11:02:04 -- common/autobuild_common.sh@435 -- $ date +%s 00:10:56.741 11:02:04 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713438124.XXXXXX 00:10:56.741 11:02:04 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713438124.eB0t7z 00:10:56.741 11:02:04 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:10:56.741 11:02:04 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:10:56.741 11:02:04 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:10:56.741 11:02:04 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:10:56.741 11:02:04 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:10:56.741 11:02:04 -- common/autobuild_common.sh@451 -- $ get_config_params 00:10:56.741 11:02:04 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:10:56.741 11:02:04 -- common/autotest_common.sh@10 -- $ set +x 00:10:56.741 11:02:04 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-avahi --with-golang' 00:10:56.741 11:02:04 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:10:56.741 11:02:04 -- pm/common@17 -- $ local monitor 00:10:56.741 11:02:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:56.741 11:02:04 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5181 00:10:56.741 11:02:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:56.741 11:02:04 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5183 00:10:56.741 11:02:04 -- pm/common@21 -- $ date +%s 00:10:56.741 11:02:04 -- pm/common@26 -- $ sleep 1 00:10:56.741 11:02:04 -- pm/common@21 -- $ date +%s 00:10:56.741 11:02:04 -- pm/common@21 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713438124 00:10:56.741 11:02:04 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713438124 00:10:56.741 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713438124_collect-vmstat.pm.log 00:10:56.741 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713438124_collect-cpu-load.pm.log 00:10:57.677 11:02:05 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:10:57.677 11:02:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:10:57.677 11:02:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:10:57.677 11:02:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:10:57.677 11:02:05 -- spdk/autobuild.sh@16 -- $ date -u 00:10:57.677 Thu Apr 18 11:02:05 AM UTC 2024 00:10:57.677 11:02:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:10:57.677 v24.05-pre-407-g65b4e17c6 00:10:57.677 11:02:05 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:10:57.677 11:02:05 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:10:57.677 11:02:05 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:10:57.677 11:02:05 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:10:57.677 11:02:05 -- common/autotest_common.sh@10 -- $ set +x 00:10:57.677 ************************************ 00:10:57.677 START TEST asan 00:10:57.677 ************************************ 00:10:57.677 using asan 00:10:57.677 11:02:05 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:10:57.677 00:10:57.677 real 0m0.000s 00:10:57.677 user 0m0.000s 00:10:57.677 sys 0m0.000s 00:10:57.677 11:02:05 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:10:57.677 ************************************ 00:10:57.677 END TEST asan 00:10:57.677 11:02:05 -- common/autotest_common.sh@10 -- $ set +x 00:10:57.677 ************************************ 00:10:57.935 11:02:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:10:57.935 11:02:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:10:57.935 11:02:05 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:10:57.935 11:02:05 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:10:57.935 11:02:05 -- common/autotest_common.sh@10 -- $ set +x 00:10:57.935 ************************************ 00:10:57.935 START TEST ubsan 00:10:57.935 ************************************ 00:10:57.935 using ubsan 00:10:57.935 11:02:05 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:10:57.935 00:10:57.935 real 0m0.000s 00:10:57.935 user 0m0.000s 00:10:57.935 sys 0m0.000s 00:10:57.935 11:02:05 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:10:57.935 11:02:05 -- common/autotest_common.sh@10 -- $ set +x 00:10:57.935 ************************************ 00:10:57.935 END TEST ubsan 00:10:57.935 ************************************ 00:10:57.935 11:02:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:10:57.935 11:02:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:10:57.935 11:02:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:10:57.935 11:02:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:10:57.935 11:02:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:10:57.935 11:02:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:10:57.935 11:02:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:10:57.935 11:02:06 -- 
spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:10:57.935 11:02:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:10:57.935 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:57.935 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:58.503 Using 'verbs' RDMA provider 00:11:11.650 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:11:26.604 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:11:26.604 go version go1.21.1 linux/amd64 00:11:26.604 Creating mk/config.mk...done. 00:11:26.604 Creating mk/cc.flags.mk...done. 00:11:26.604 Type 'make' to build. 00:11:26.604 11:02:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:11:26.604 11:02:32 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:11:26.604 11:02:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:11:26.604 11:02:32 -- common/autotest_common.sh@10 -- $ set +x 00:11:26.604 ************************************ 00:11:26.604 START TEST make 00:11:26.604 ************************************ 00:11:26.604 11:02:32 -- common/autotest_common.sh@1111 -- $ make -j10 00:11:26.604 make[1]: Nothing to be done for 'all'. 00:11:36.578 The Meson build system 00:11:36.578 Version: 1.3.1 00:11:36.578 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:11:36.578 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:11:36.578 Build type: native build 00:11:36.578 Program cat found: YES (/usr/bin/cat) 00:11:36.578 Project name: DPDK 00:11:36.578 Project version: 23.11.0 00:11:36.578 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:11:36.578 C linker for the host machine: cc ld.bfd 2.39-16 00:11:36.578 Host machine cpu family: x86_64 00:11:36.578 Host machine cpu: x86_64 00:11:36.578 Message: ## Building in Developer Mode ## 00:11:36.578 Program pkg-config found: YES (/usr/bin/pkg-config) 00:11:36.578 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:11:36.578 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:11:36.578 Program python3 found: YES (/usr/bin/python3) 00:11:36.578 Program cat found: YES (/usr/bin/cat) 00:11:36.578 Compiler for C supports arguments -march=native: YES 00:11:36.578 Checking for size of "void *" : 8 00:11:36.578 Checking for size of "void *" : 8 (cached) 00:11:36.578 Library m found: YES 00:11:36.578 Library numa found: YES 00:11:36.578 Has header "numaif.h" : YES 00:11:36.578 Library fdt found: NO 00:11:36.578 Library execinfo found: NO 00:11:36.578 Has header "execinfo.h" : YES 00:11:36.578 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:11:36.578 Run-time dependency libarchive found: NO (tried pkgconfig) 00:11:36.578 Run-time dependency libbsd found: NO (tried pkgconfig) 00:11:36.578 Run-time dependency jansson found: NO (tried pkgconfig) 00:11:36.578 Run-time dependency openssl found: YES 3.0.9 00:11:36.578 Run-time dependency libpcap found: YES 1.10.4 00:11:36.578 Has header "pcap.h" with dependency libpcap: YES 00:11:36.578 Compiler for C supports arguments -Wcast-qual: YES 00:11:36.578 Compiler for C supports arguments -Wdeprecated: YES 00:11:36.578 
Compiler for C supports arguments -Wformat: YES 00:11:36.578 Compiler for C supports arguments -Wformat-nonliteral: NO 00:11:36.578 Compiler for C supports arguments -Wformat-security: NO 00:11:36.578 Compiler for C supports arguments -Wmissing-declarations: YES 00:11:36.578 Compiler for C supports arguments -Wmissing-prototypes: YES 00:11:36.578 Compiler for C supports arguments -Wnested-externs: YES 00:11:36.578 Compiler for C supports arguments -Wold-style-definition: YES 00:11:36.579 Compiler for C supports arguments -Wpointer-arith: YES 00:11:36.579 Compiler for C supports arguments -Wsign-compare: YES 00:11:36.579 Compiler for C supports arguments -Wstrict-prototypes: YES 00:11:36.579 Compiler for C supports arguments -Wundef: YES 00:11:36.579 Compiler for C supports arguments -Wwrite-strings: YES 00:11:36.579 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:11:36.579 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:11:36.579 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:11:36.579 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:11:36.579 Program objdump found: YES (/usr/bin/objdump) 00:11:36.579 Compiler for C supports arguments -mavx512f: YES 00:11:36.579 Checking if "AVX512 checking" compiles: YES 00:11:36.579 Fetching value of define "__SSE4_2__" : 1 00:11:36.579 Fetching value of define "__AES__" : 1 00:11:36.579 Fetching value of define "__AVX__" : 1 00:11:36.579 Fetching value of define "__AVX2__" : 1 00:11:36.579 Fetching value of define "__AVX512BW__" : (undefined) 00:11:36.579 Fetching value of define "__AVX512CD__" : (undefined) 00:11:36.579 Fetching value of define "__AVX512DQ__" : (undefined) 00:11:36.579 Fetching value of define "__AVX512F__" : (undefined) 00:11:36.579 Fetching value of define "__AVX512VL__" : (undefined) 00:11:36.579 Fetching value of define "__PCLMUL__" : 1 00:11:36.579 Fetching value of define "__RDRND__" : 1 00:11:36.579 Fetching value of define "__RDSEED__" : 1 00:11:36.579 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:11:36.579 Fetching value of define "__znver1__" : (undefined) 00:11:36.579 Fetching value of define "__znver2__" : (undefined) 00:11:36.579 Fetching value of define "__znver3__" : (undefined) 00:11:36.579 Fetching value of define "__znver4__" : (undefined) 00:11:36.579 Library asan found: YES 00:11:36.579 Compiler for C supports arguments -Wno-format-truncation: YES 00:11:36.579 Message: lib/log: Defining dependency "log" 00:11:36.579 Message: lib/kvargs: Defining dependency "kvargs" 00:11:36.579 Message: lib/telemetry: Defining dependency "telemetry" 00:11:36.579 Library rt found: YES 00:11:36.579 Checking for function "getentropy" : NO 00:11:36.579 Message: lib/eal: Defining dependency "eal" 00:11:36.579 Message: lib/ring: Defining dependency "ring" 00:11:36.579 Message: lib/rcu: Defining dependency "rcu" 00:11:36.579 Message: lib/mempool: Defining dependency "mempool" 00:11:36.579 Message: lib/mbuf: Defining dependency "mbuf" 00:11:36.579 Fetching value of define "__PCLMUL__" : 1 (cached) 00:11:36.579 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:11:36.579 Compiler for C supports arguments -mpclmul: YES 00:11:36.579 Compiler for C supports arguments -maes: YES 00:11:36.579 Compiler for C supports arguments -mavx512f: YES (cached) 00:11:36.579 Compiler for C supports arguments -mavx512bw: YES 00:11:36.579 Compiler for C supports arguments -mavx512dq: YES 00:11:36.579 Compiler for C supports arguments -mavx512vl: YES 
00:11:36.579 Compiler for C supports arguments -mvpclmulqdq: YES 00:11:36.579 Compiler for C supports arguments -mavx2: YES 00:11:36.579 Compiler for C supports arguments -mavx: YES 00:11:36.579 Message: lib/net: Defining dependency "net" 00:11:36.579 Message: lib/meter: Defining dependency "meter" 00:11:36.579 Message: lib/ethdev: Defining dependency "ethdev" 00:11:36.579 Message: lib/pci: Defining dependency "pci" 00:11:36.579 Message: lib/cmdline: Defining dependency "cmdline" 00:11:36.579 Message: lib/hash: Defining dependency "hash" 00:11:36.579 Message: lib/timer: Defining dependency "timer" 00:11:36.579 Message: lib/compressdev: Defining dependency "compressdev" 00:11:36.579 Message: lib/cryptodev: Defining dependency "cryptodev" 00:11:36.579 Message: lib/dmadev: Defining dependency "dmadev" 00:11:36.579 Compiler for C supports arguments -Wno-cast-qual: YES 00:11:36.579 Message: lib/power: Defining dependency "power" 00:11:36.579 Message: lib/reorder: Defining dependency "reorder" 00:11:36.579 Message: lib/security: Defining dependency "security" 00:11:36.579 Has header "linux/userfaultfd.h" : YES 00:11:36.579 Has header "linux/vduse.h" : YES 00:11:36.579 Message: lib/vhost: Defining dependency "vhost" 00:11:36.579 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:11:36.579 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:11:36.579 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:11:36.579 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:11:36.579 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:11:36.579 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:11:36.579 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:11:36.579 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:11:36.579 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:11:36.579 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:11:36.579 Program doxygen found: YES (/usr/bin/doxygen) 00:11:36.579 Configuring doxy-api-html.conf using configuration 00:11:36.579 Configuring doxy-api-man.conf using configuration 00:11:36.579 Program mandb found: YES (/usr/bin/mandb) 00:11:36.579 Program sphinx-build found: NO 00:11:36.579 Configuring rte_build_config.h using configuration 00:11:36.579 Message: 00:11:36.579 ================= 00:11:36.579 Applications Enabled 00:11:36.579 ================= 00:11:36.579 00:11:36.579 apps: 00:11:36.579 00:11:36.579 00:11:36.579 Message: 00:11:36.579 ================= 00:11:36.579 Libraries Enabled 00:11:36.579 ================= 00:11:36.579 00:11:36.579 libs: 00:11:36.579 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:11:36.579 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:11:36.579 cryptodev, dmadev, power, reorder, security, vhost, 00:11:36.579 00:11:36.579 Message: 00:11:36.579 =============== 00:11:36.579 Drivers Enabled 00:11:36.579 =============== 00:11:36.579 00:11:36.579 common: 00:11:36.579 00:11:36.579 bus: 00:11:36.579 pci, vdev, 00:11:36.579 mempool: 00:11:36.579 ring, 00:11:36.579 dma: 00:11:36.579 00:11:36.579 net: 00:11:36.579 00:11:36.579 crypto: 00:11:36.579 00:11:36.579 compress: 00:11:36.579 00:11:36.579 vdpa: 00:11:36.579 00:11:36.579 00:11:36.579 Message: 00:11:36.579 ================= 00:11:36.579 Content Skipped 00:11:36.579 ================= 00:11:36.579 00:11:36.579 apps: 00:11:36.579 dumpcap: explicitly disabled via build 
config 00:11:36.579 graph: explicitly disabled via build config 00:11:36.579 pdump: explicitly disabled via build config 00:11:36.579 proc-info: explicitly disabled via build config 00:11:36.579 test-acl: explicitly disabled via build config 00:11:36.579 test-bbdev: explicitly disabled via build config 00:11:36.579 test-cmdline: explicitly disabled via build config 00:11:36.579 test-compress-perf: explicitly disabled via build config 00:11:36.579 test-crypto-perf: explicitly disabled via build config 00:11:36.579 test-dma-perf: explicitly disabled via build config 00:11:36.579 test-eventdev: explicitly disabled via build config 00:11:36.579 test-fib: explicitly disabled via build config 00:11:36.579 test-flow-perf: explicitly disabled via build config 00:11:36.579 test-gpudev: explicitly disabled via build config 00:11:36.579 test-mldev: explicitly disabled via build config 00:11:36.579 test-pipeline: explicitly disabled via build config 00:11:36.579 test-pmd: explicitly disabled via build config 00:11:36.579 test-regex: explicitly disabled via build config 00:11:36.579 test-sad: explicitly disabled via build config 00:11:36.579 test-security-perf: explicitly disabled via build config 00:11:36.579 00:11:36.579 libs: 00:11:36.579 metrics: explicitly disabled via build config 00:11:36.579 acl: explicitly disabled via build config 00:11:36.579 bbdev: explicitly disabled via build config 00:11:36.579 bitratestats: explicitly disabled via build config 00:11:36.579 bpf: explicitly disabled via build config 00:11:36.579 cfgfile: explicitly disabled via build config 00:11:36.579 distributor: explicitly disabled via build config 00:11:36.579 efd: explicitly disabled via build config 00:11:36.579 eventdev: explicitly disabled via build config 00:11:36.579 dispatcher: explicitly disabled via build config 00:11:36.579 gpudev: explicitly disabled via build config 00:11:36.579 gro: explicitly disabled via build config 00:11:36.579 gso: explicitly disabled via build config 00:11:36.579 ip_frag: explicitly disabled via build config 00:11:36.579 jobstats: explicitly disabled via build config 00:11:36.579 latencystats: explicitly disabled via build config 00:11:36.579 lpm: explicitly disabled via build config 00:11:36.579 member: explicitly disabled via build config 00:11:36.579 pcapng: explicitly disabled via build config 00:11:36.579 rawdev: explicitly disabled via build config 00:11:36.579 regexdev: explicitly disabled via build config 00:11:36.579 mldev: explicitly disabled via build config 00:11:36.579 rib: explicitly disabled via build config 00:11:36.579 sched: explicitly disabled via build config 00:11:36.579 stack: explicitly disabled via build config 00:11:36.579 ipsec: explicitly disabled via build config 00:11:36.579 pdcp: explicitly disabled via build config 00:11:36.579 fib: explicitly disabled via build config 00:11:36.579 port: explicitly disabled via build config 00:11:36.579 pdump: explicitly disabled via build config 00:11:36.579 table: explicitly disabled via build config 00:11:36.579 pipeline: explicitly disabled via build config 00:11:36.579 graph: explicitly disabled via build config 00:11:36.579 node: explicitly disabled via build config 00:11:36.579 00:11:36.579 drivers: 00:11:36.579 common/cpt: not in enabled drivers build config 00:11:36.579 common/dpaax: not in enabled drivers build config 00:11:36.579 common/iavf: not in enabled drivers build config 00:11:36.579 common/idpf: not in enabled drivers build config 00:11:36.579 common/mvep: not in enabled drivers build config 
00:11:36.579 common/octeontx: not in enabled drivers build config 00:11:36.579 bus/auxiliary: not in enabled drivers build config 00:11:36.579 bus/cdx: not in enabled drivers build config 00:11:36.579 bus/dpaa: not in enabled drivers build config 00:11:36.579 bus/fslmc: not in enabled drivers build config 00:11:36.580 bus/ifpga: not in enabled drivers build config 00:11:36.580 bus/platform: not in enabled drivers build config 00:11:36.580 bus/vmbus: not in enabled drivers build config 00:11:36.580 common/cnxk: not in enabled drivers build config 00:11:36.580 common/mlx5: not in enabled drivers build config 00:11:36.580 common/nfp: not in enabled drivers build config 00:11:36.580 common/qat: not in enabled drivers build config 00:11:36.580 common/sfc_efx: not in enabled drivers build config 00:11:36.580 mempool/bucket: not in enabled drivers build config 00:11:36.580 mempool/cnxk: not in enabled drivers build config 00:11:36.580 mempool/dpaa: not in enabled drivers build config 00:11:36.580 mempool/dpaa2: not in enabled drivers build config 00:11:36.580 mempool/octeontx: not in enabled drivers build config 00:11:36.580 mempool/stack: not in enabled drivers build config 00:11:36.580 dma/cnxk: not in enabled drivers build config 00:11:36.580 dma/dpaa: not in enabled drivers build config 00:11:36.580 dma/dpaa2: not in enabled drivers build config 00:11:36.580 dma/hisilicon: not in enabled drivers build config 00:11:36.580 dma/idxd: not in enabled drivers build config 00:11:36.580 dma/ioat: not in enabled drivers build config 00:11:36.580 dma/skeleton: not in enabled drivers build config 00:11:36.580 net/af_packet: not in enabled drivers build config 00:11:36.580 net/af_xdp: not in enabled drivers build config 00:11:36.580 net/ark: not in enabled drivers build config 00:11:36.580 net/atlantic: not in enabled drivers build config 00:11:36.580 net/avp: not in enabled drivers build config 00:11:36.580 net/axgbe: not in enabled drivers build config 00:11:36.580 net/bnx2x: not in enabled drivers build config 00:11:36.580 net/bnxt: not in enabled drivers build config 00:11:36.580 net/bonding: not in enabled drivers build config 00:11:36.580 net/cnxk: not in enabled drivers build config 00:11:36.580 net/cpfl: not in enabled drivers build config 00:11:36.580 net/cxgbe: not in enabled drivers build config 00:11:36.580 net/dpaa: not in enabled drivers build config 00:11:36.580 net/dpaa2: not in enabled drivers build config 00:11:36.580 net/e1000: not in enabled drivers build config 00:11:36.580 net/ena: not in enabled drivers build config 00:11:36.580 net/enetc: not in enabled drivers build config 00:11:36.580 net/enetfec: not in enabled drivers build config 00:11:36.580 net/enic: not in enabled drivers build config 00:11:36.580 net/failsafe: not in enabled drivers build config 00:11:36.580 net/fm10k: not in enabled drivers build config 00:11:36.580 net/gve: not in enabled drivers build config 00:11:36.580 net/hinic: not in enabled drivers build config 00:11:36.580 net/hns3: not in enabled drivers build config 00:11:36.580 net/i40e: not in enabled drivers build config 00:11:36.580 net/iavf: not in enabled drivers build config 00:11:36.580 net/ice: not in enabled drivers build config 00:11:36.580 net/idpf: not in enabled drivers build config 00:11:36.580 net/igc: not in enabled drivers build config 00:11:36.580 net/ionic: not in enabled drivers build config 00:11:36.580 net/ipn3ke: not in enabled drivers build config 00:11:36.580 net/ixgbe: not in enabled drivers build config 00:11:36.580 net/mana: not in 
enabled drivers build config 00:11:36.580 net/memif: not in enabled drivers build config 00:11:36.580 net/mlx4: not in enabled drivers build config 00:11:36.580 net/mlx5: not in enabled drivers build config 00:11:36.580 net/mvneta: not in enabled drivers build config 00:11:36.580 net/mvpp2: not in enabled drivers build config 00:11:36.580 net/netvsc: not in enabled drivers build config 00:11:36.580 net/nfb: not in enabled drivers build config 00:11:36.580 net/nfp: not in enabled drivers build config 00:11:36.580 net/ngbe: not in enabled drivers build config 00:11:36.580 net/null: not in enabled drivers build config 00:11:36.580 net/octeontx: not in enabled drivers build config 00:11:36.580 net/octeon_ep: not in enabled drivers build config 00:11:36.580 net/pcap: not in enabled drivers build config 00:11:36.580 net/pfe: not in enabled drivers build config 00:11:36.580 net/qede: not in enabled drivers build config 00:11:36.580 net/ring: not in enabled drivers build config 00:11:36.580 net/sfc: not in enabled drivers build config 00:11:36.580 net/softnic: not in enabled drivers build config 00:11:36.580 net/tap: not in enabled drivers build config 00:11:36.580 net/thunderx: not in enabled drivers build config 00:11:36.580 net/txgbe: not in enabled drivers build config 00:11:36.580 net/vdev_netvsc: not in enabled drivers build config 00:11:36.580 net/vhost: not in enabled drivers build config 00:11:36.580 net/virtio: not in enabled drivers build config 00:11:36.580 net/vmxnet3: not in enabled drivers build config 00:11:36.580 raw/*: missing internal dependency, "rawdev" 00:11:36.580 crypto/armv8: not in enabled drivers build config 00:11:36.580 crypto/bcmfs: not in enabled drivers build config 00:11:36.580 crypto/caam_jr: not in enabled drivers build config 00:11:36.580 crypto/ccp: not in enabled drivers build config 00:11:36.580 crypto/cnxk: not in enabled drivers build config 00:11:36.580 crypto/dpaa_sec: not in enabled drivers build config 00:11:36.580 crypto/dpaa2_sec: not in enabled drivers build config 00:11:36.580 crypto/ipsec_mb: not in enabled drivers build config 00:11:36.580 crypto/mlx5: not in enabled drivers build config 00:11:36.580 crypto/mvsam: not in enabled drivers build config 00:11:36.580 crypto/nitrox: not in enabled drivers build config 00:11:36.580 crypto/null: not in enabled drivers build config 00:11:36.580 crypto/octeontx: not in enabled drivers build config 00:11:36.580 crypto/openssl: not in enabled drivers build config 00:11:36.580 crypto/scheduler: not in enabled drivers build config 00:11:36.580 crypto/uadk: not in enabled drivers build config 00:11:36.580 crypto/virtio: not in enabled drivers build config 00:11:36.580 compress/isal: not in enabled drivers build config 00:11:36.580 compress/mlx5: not in enabled drivers build config 00:11:36.580 compress/octeontx: not in enabled drivers build config 00:11:36.580 compress/zlib: not in enabled drivers build config 00:11:36.580 regex/*: missing internal dependency, "regexdev" 00:11:36.580 ml/*: missing internal dependency, "mldev" 00:11:36.580 vdpa/ifc: not in enabled drivers build config 00:11:36.580 vdpa/mlx5: not in enabled drivers build config 00:11:36.580 vdpa/nfp: not in enabled drivers build config 00:11:36.580 vdpa/sfc: not in enabled drivers build config 00:11:36.580 event/*: missing internal dependency, "eventdev" 00:11:36.580 baseband/*: missing internal dependency, "bbdev" 00:11:36.580 gpu/*: missing internal dependency, "gpudev" 00:11:36.580 00:11:36.580 00:11:36.838 Build targets in project: 85 
00:11:36.838 00:11:36.838 DPDK 23.11.0 00:11:36.838 00:11:36.838 User defined options 00:11:36.838 buildtype : debug 00:11:36.838 default_library : shared 00:11:36.838 libdir : lib 00:11:36.838 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:36.838 b_sanitize : address 00:11:36.838 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:11:36.838 c_link_args : 00:11:36.838 cpu_instruction_set: native 00:11:36.838 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:11:36.838 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:11:36.838 enable_docs : false 00:11:36.838 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:11:36.838 enable_kmods : false 00:11:36.838 tests : false 00:11:36.838 00:11:36.838 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:11:37.405 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:11:37.405 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:11:37.405 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:11:37.405 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:11:37.405 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:11:37.663 [5/265] Linking static target lib/librte_kvargs.a 00:11:37.663 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:11:37.663 [7/265] Linking static target lib/librte_log.a 00:11:37.663 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:11:37.663 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:11:37.663 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:11:38.230 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:11:38.230 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:11:38.230 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:11:38.488 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:11:38.488 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:11:38.488 [16/265] Linking static target lib/librte_telemetry.a 00:11:38.488 [17/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:11:38.488 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:11:38.488 [19/265] Linking target lib/librte_log.so.24.0 00:11:38.488 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:11:38.746 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:11:38.746 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:11:38.746 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:11:39.004 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:11:39.004 [25/265] Linking target lib/librte_kvargs.so.24.0 00:11:39.004 [26/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:11:39.262 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:11:39.262 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:11:39.262 [29/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:11:39.520 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:11:39.520 [31/265] Linking target lib/librte_telemetry.so.24.0 00:11:39.520 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:11:39.520 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:11:39.778 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:11:39.778 [35/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:11:39.778 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:11:39.778 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:11:39.778 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:11:40.036 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:11:40.036 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:11:40.036 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:11:40.036 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:11:40.295 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:11:40.295 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:11:40.295 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:11:40.555 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:11:40.555 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:11:40.813 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:11:40.813 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:11:41.071 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:11:41.071 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:11:41.071 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:11:41.071 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:11:41.071 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:11:41.071 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:11:41.071 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:11:41.329 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:11:41.329 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:11:41.587 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:11:41.587 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:11:41.845 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:11:41.845 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:11:41.845 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:11:41.845 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:11:41.845 [65/265] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:11:42.104 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:11:42.104 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:11:42.104 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:11:42.361 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:11:42.361 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:11:42.620 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:11:42.620 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:11:42.620 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:11:42.620 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:11:42.620 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:11:42.620 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:11:42.620 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:11:43.187 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:11:43.187 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:11:43.187 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:11:43.446 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:11:43.446 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:11:43.446 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:11:43.704 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:11:43.704 [85/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:11:43.704 [86/265] Linking static target lib/librte_eal.a 00:11:43.704 [87/265] Linking static target lib/librte_ring.a 00:11:43.962 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:11:44.220 [89/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:11:44.220 [90/265] Linking static target lib/librte_rcu.a 00:11:44.220 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:11:44.220 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:11:44.220 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:11:44.478 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:11:44.478 [95/265] Linking static target lib/librte_mempool.a 00:11:44.478 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:11:44.478 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:11:44.737 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:11:44.995 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:11:44.995 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:11:44.995 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:11:45.254 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:11:45.512 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:11:45.512 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:11:45.770 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:11:45.770 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:11:45.770 [107/265] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:11:45.770 [108/265] Linking static target lib/librte_net.a 00:11:45.770 [109/265] Linking static target lib/librte_meter.a 00:11:45.770 [110/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:11:45.770 [111/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:11:46.029 [112/265] Linking static target lib/librte_mbuf.a 00:11:46.288 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:11:46.288 [114/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:11:46.288 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:11:46.288 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:11:46.288 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:11:46.546 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:11:47.115 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:11:47.115 [120/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:11:47.115 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:11:47.377 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:11:47.377 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:11:47.377 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:11:47.377 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:11:47.377 [126/265] Linking static target lib/librte_pci.a 00:11:47.377 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:11:47.635 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:11:47.635 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:11:47.635 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:11:47.893 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:11:47.893 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:11:47.893 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:11:47.893 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:11:47.893 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:11:47.893 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:11:47.893 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:11:47.893 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:11:48.151 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:11:48.151 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:11:48.151 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:11:48.151 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:11:48.409 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:11:48.667 [144/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:11:48.667 [145/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:11:48.667 [146/265] Linking static target lib/librte_timer.a 00:11:48.667 [147/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:11:48.667 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:11:48.667 [149/265] Linking static target lib/librte_cmdline.a 00:11:48.925 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:11:48.925 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:11:49.183 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:11:49.183 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:11:49.183 [154/265] Linking static target lib/librte_ethdev.a 00:11:49.442 [155/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:11:49.442 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:11:49.442 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:11:49.442 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:11:49.442 [159/265] Linking static target lib/librte_compressdev.a 00:11:49.700 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:11:49.700 [161/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:11:49.700 [162/265] Linking static target lib/librte_hash.a 00:11:49.958 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:11:49.958 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:11:49.958 [165/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:11:49.958 [166/265] Linking static target lib/librte_dmadev.a 00:11:50.217 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:11:50.570 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:11:50.570 [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:11:50.570 [170/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:11:50.570 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:50.829 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:11:50.829 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:50.829 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:11:51.088 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:11:51.088 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:11:51.088 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:11:51.088 [178/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:11:51.088 [179/265] Linking static target lib/librte_cryptodev.a 00:11:51.346 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:11:51.604 [181/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:11:51.604 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:11:51.604 [183/265] Linking static target lib/librte_power.a 00:11:51.863 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:11:51.863 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:11:51.863 [186/265] Linking static target lib/librte_reorder.a 00:11:52.121 [187/265] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:11:52.121 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:11:52.121 [189/265] Linking static target lib/librte_security.a 00:11:52.121 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:11:52.378 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:11:52.945 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:11:52.945 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:11:52.945 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:11:52.945 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:11:53.203 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:11:53.203 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:11:53.460 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:11:53.460 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:11:53.718 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:11:53.718 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:11:53.718 [202/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:53.718 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:11:53.976 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:11:54.234 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:11:54.234 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:11:54.234 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:11:54.234 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:11:54.492 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:11:54.492 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:11:54.492 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:54.492 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:54.492 [213/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:54.492 [214/265] Linking static target drivers/librte_bus_vdev.a 00:11:54.492 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:54.492 [216/265] Linking static target drivers/librte_bus_pci.a 00:11:54.492 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:11:54.492 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:11:54.750 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:54.750 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:11:54.750 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:54.750 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:54.750 [223/265] Linking static target drivers/librte_mempool_ring.a 00:11:55.007 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:11:55.571 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:11:55.571 [226/265] Linking target lib/librte_eal.so.24.0 00:11:55.828 [227/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:11:55.828 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:11:55.828 [229/265] Linking target lib/librte_pci.so.24.0 00:11:55.828 [230/265] Linking target lib/librte_ring.so.24.0 00:11:55.828 [231/265] Linking target lib/librte_meter.so.24.0 00:11:55.828 [232/265] Linking target lib/librte_dmadev.so.24.0 00:11:55.828 [233/265] Linking target lib/librte_timer.so.24.0 00:11:55.828 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:11:56.087 [235/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:11:56.087 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:11:56.087 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:11:56.087 [238/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:11:56.087 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:11:56.087 [240/265] Linking target drivers/librte_bus_pci.so.24.0 00:11:56.087 [241/265] Linking target lib/librte_mempool.so.24.0 00:11:56.087 [242/265] Linking target lib/librte_rcu.so.24.0 00:11:56.345 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:11:56.345 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:11:56.345 [245/265] Linking target lib/librte_mbuf.so.24.0 00:11:56.345 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:11:56.345 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:11:56.603 [248/265] Linking target lib/librte_compressdev.so.24.0 00:11:56.603 [249/265] Linking target lib/librte_net.so.24.0 00:11:56.603 [250/265] Linking target lib/librte_reorder.so.24.0 00:11:56.603 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:11:56.603 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:11:56.603 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:11:56.603 [254/265] Linking target lib/librte_hash.so.24.0 00:11:56.603 [255/265] Linking target lib/librte_security.so.24.0 00:11:56.603 [256/265] Linking target lib/librte_cmdline.so.24.0 00:11:56.862 [257/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:11:56.862 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:56.862 [259/265] Linking target lib/librte_ethdev.so.24.0 00:11:57.120 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:11:57.120 [261/265] Linking target lib/librte_power.so.24.0 00:11:59.650 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:11:59.650 [263/265] Linking static target lib/librte_vhost.a 00:12:01.027 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:12:01.285 [265/265] Linking target lib/librte_vhost.so.24.0 00:12:01.285 INFO: autodetecting backend as ninja 00:12:01.285 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:12:02.221 CC lib/ut/ut.o 00:12:02.221 CC 
lib/log/log.o 00:12:02.221 CC lib/log/log_flags.o 00:12:02.221 CC lib/log/log_deprecated.o 00:12:02.221 CC lib/ut_mock/mock.o 00:12:02.479 LIB libspdk_ut_mock.a 00:12:02.479 LIB libspdk_log.a 00:12:02.479 LIB libspdk_ut.a 00:12:02.479 SO libspdk_ut_mock.so.6.0 00:12:02.479 SO libspdk_ut.so.2.0 00:12:02.479 SO libspdk_log.so.7.0 00:12:02.737 SYMLINK libspdk_ut_mock.so 00:12:02.738 SYMLINK libspdk_ut.so 00:12:02.738 SYMLINK libspdk_log.so 00:12:02.996 CXX lib/trace_parser/trace.o 00:12:02.996 CC lib/dma/dma.o 00:12:02.996 CC lib/util/base64.o 00:12:02.996 CC lib/util/bit_array.o 00:12:02.996 CC lib/util/cpuset.o 00:12:02.996 CC lib/ioat/ioat.o 00:12:02.996 CC lib/util/crc16.o 00:12:02.996 CC lib/util/crc32.o 00:12:02.996 CC lib/util/crc32c.o 00:12:02.996 CC lib/vfio_user/host/vfio_user_pci.o 00:12:02.996 CC lib/util/crc32_ieee.o 00:12:02.996 CC lib/util/crc64.o 00:12:02.996 CC lib/vfio_user/host/vfio_user.o 00:12:03.255 LIB libspdk_dma.a 00:12:03.255 CC lib/util/dif.o 00:12:03.255 CC lib/util/fd.o 00:12:03.255 SO libspdk_dma.so.4.0 00:12:03.255 CC lib/util/file.o 00:12:03.255 CC lib/util/hexlify.o 00:12:03.255 CC lib/util/iov.o 00:12:03.255 SYMLINK libspdk_dma.so 00:12:03.255 LIB libspdk_ioat.a 00:12:03.255 CC lib/util/math.o 00:12:03.255 SO libspdk_ioat.so.7.0 00:12:03.255 CC lib/util/pipe.o 00:12:03.255 LIB libspdk_vfio_user.a 00:12:03.255 CC lib/util/strerror_tls.o 00:12:03.255 CC lib/util/string.o 00:12:03.255 SYMLINK libspdk_ioat.so 00:12:03.255 CC lib/util/uuid.o 00:12:03.514 SO libspdk_vfio_user.so.5.0 00:12:03.514 CC lib/util/fd_group.o 00:12:03.514 CC lib/util/xor.o 00:12:03.514 SYMLINK libspdk_vfio_user.so 00:12:03.514 CC lib/util/zipf.o 00:12:04.081 LIB libspdk_util.a 00:12:04.081 SO libspdk_util.so.9.0 00:12:04.081 LIB libspdk_trace_parser.a 00:12:04.081 SO libspdk_trace_parser.so.5.0 00:12:04.081 SYMLINK libspdk_util.so 00:12:04.339 SYMLINK libspdk_trace_parser.so 00:12:04.339 CC lib/rdma/common.o 00:12:04.339 CC lib/rdma/rdma_verbs.o 00:12:04.339 CC lib/env_dpdk/env.o 00:12:04.339 CC lib/vmd/vmd.o 00:12:04.339 CC lib/idxd/idxd.o 00:12:04.339 CC lib/vmd/led.o 00:12:04.339 CC lib/env_dpdk/memory.o 00:12:04.339 CC lib/env_dpdk/pci.o 00:12:04.339 CC lib/json/json_parse.o 00:12:04.339 CC lib/conf/conf.o 00:12:04.598 CC lib/json/json_util.o 00:12:04.598 CC lib/json/json_write.o 00:12:04.598 CC lib/idxd/idxd_user.o 00:12:04.856 LIB libspdk_conf.a 00:12:04.856 SO libspdk_conf.so.6.0 00:12:04.856 LIB libspdk_rdma.a 00:12:04.856 SO libspdk_rdma.so.6.0 00:12:04.856 SYMLINK libspdk_conf.so 00:12:04.856 CC lib/env_dpdk/init.o 00:12:04.856 CC lib/env_dpdk/threads.o 00:12:04.856 SYMLINK libspdk_rdma.so 00:12:04.856 CC lib/env_dpdk/pci_ioat.o 00:12:04.856 CC lib/env_dpdk/pci_virtio.o 00:12:04.856 CC lib/env_dpdk/pci_vmd.o 00:12:05.113 LIB libspdk_json.a 00:12:05.113 CC lib/env_dpdk/pci_idxd.o 00:12:05.113 CC lib/env_dpdk/pci_event.o 00:12:05.113 CC lib/env_dpdk/sigbus_handler.o 00:12:05.113 SO libspdk_json.so.6.0 00:12:05.113 CC lib/env_dpdk/pci_dpdk.o 00:12:05.113 LIB libspdk_idxd.a 00:12:05.113 SYMLINK libspdk_json.so 00:12:05.113 CC lib/env_dpdk/pci_dpdk_2207.o 00:12:05.113 CC lib/env_dpdk/pci_dpdk_2211.o 00:12:05.113 SO libspdk_idxd.so.12.0 00:12:05.370 SYMLINK libspdk_idxd.so 00:12:05.370 LIB libspdk_vmd.a 00:12:05.370 SO libspdk_vmd.so.6.0 00:12:05.370 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:12:05.370 CC lib/jsonrpc/jsonrpc_server.o 00:12:05.370 CC lib/jsonrpc/jsonrpc_client.o 00:12:05.370 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:12:05.370 SYMLINK libspdk_vmd.so 00:12:05.628 LIB 
libspdk_jsonrpc.a 00:12:05.886 SO libspdk_jsonrpc.so.6.0 00:12:05.886 SYMLINK libspdk_jsonrpc.so 00:12:06.144 CC lib/rpc/rpc.o 00:12:06.144 LIB libspdk_env_dpdk.a 00:12:06.403 LIB libspdk_rpc.a 00:12:06.403 SO libspdk_rpc.so.6.0 00:12:06.403 SO libspdk_env_dpdk.so.14.0 00:12:06.403 SYMLINK libspdk_rpc.so 00:12:06.661 SYMLINK libspdk_env_dpdk.so 00:12:06.661 CC lib/keyring/keyring.o 00:12:06.661 CC lib/keyring/keyring_rpc.o 00:12:06.661 CC lib/notify/notify.o 00:12:06.661 CC lib/notify/notify_rpc.o 00:12:06.661 CC lib/trace/trace.o 00:12:06.661 CC lib/trace/trace_flags.o 00:12:06.661 CC lib/trace/trace_rpc.o 00:12:06.920 LIB libspdk_notify.a 00:12:06.920 SO libspdk_notify.so.6.0 00:12:06.920 LIB libspdk_keyring.a 00:12:06.920 LIB libspdk_trace.a 00:12:06.920 SYMLINK libspdk_notify.so 00:12:06.920 SO libspdk_keyring.so.1.0 00:12:06.920 SO libspdk_trace.so.10.0 00:12:07.178 SYMLINK libspdk_keyring.so 00:12:07.178 SYMLINK libspdk_trace.so 00:12:07.436 CC lib/thread/thread.o 00:12:07.436 CC lib/thread/iobuf.o 00:12:07.436 CC lib/sock/sock.o 00:12:07.436 CC lib/sock/sock_rpc.o 00:12:08.003 LIB libspdk_sock.a 00:12:08.003 SO libspdk_sock.so.9.0 00:12:08.003 SYMLINK libspdk_sock.so 00:12:08.261 CC lib/nvme/nvme_ctrlr_cmd.o 00:12:08.261 CC lib/nvme/nvme_ns_cmd.o 00:12:08.261 CC lib/nvme/nvme_ns.o 00:12:08.261 CC lib/nvme/nvme_ctrlr.o 00:12:08.261 CC lib/nvme/nvme_fabric.o 00:12:08.261 CC lib/nvme/nvme_pcie_common.o 00:12:08.261 CC lib/nvme/nvme_pcie.o 00:12:08.261 CC lib/nvme/nvme_qpair.o 00:12:08.261 CC lib/nvme/nvme.o 00:12:09.197 CC lib/nvme/nvme_quirks.o 00:12:09.197 CC lib/nvme/nvme_transport.o 00:12:09.197 CC lib/nvme/nvme_discovery.o 00:12:09.197 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:12:09.455 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:12:09.455 CC lib/nvme/nvme_tcp.o 00:12:09.455 CC lib/nvme/nvme_opal.o 00:12:09.455 LIB libspdk_thread.a 00:12:09.455 SO libspdk_thread.so.10.0 00:12:09.455 CC lib/nvme/nvme_io_msg.o 00:12:09.714 SYMLINK libspdk_thread.so 00:12:09.714 CC lib/nvme/nvme_poll_group.o 00:12:09.714 CC lib/nvme/nvme_zns.o 00:12:09.714 CC lib/nvme/nvme_stubs.o 00:12:09.973 CC lib/nvme/nvme_auth.o 00:12:09.973 CC lib/accel/accel.o 00:12:10.231 CC lib/accel/accel_rpc.o 00:12:10.231 CC lib/blob/blobstore.o 00:12:10.231 CC lib/blob/request.o 00:12:10.231 CC lib/nvme/nvme_cuse.o 00:12:10.231 CC lib/accel/accel_sw.o 00:12:10.528 CC lib/nvme/nvme_rdma.o 00:12:10.528 CC lib/blob/zeroes.o 00:12:10.789 CC lib/blob/blob_bs_dev.o 00:12:10.789 CC lib/init/json_config.o 00:12:10.789 CC lib/virtio/virtio.o 00:12:10.789 CC lib/virtio/virtio_vhost_user.o 00:12:11.049 CC lib/init/subsystem.o 00:12:11.049 CC lib/virtio/virtio_vfio_user.o 00:12:11.049 CC lib/virtio/virtio_pci.o 00:12:11.308 CC lib/init/subsystem_rpc.o 00:12:11.308 CC lib/init/rpc.o 00:12:11.308 LIB libspdk_accel.a 00:12:11.308 SO libspdk_accel.so.15.0 00:12:11.565 LIB libspdk_init.a 00:12:11.565 SYMLINK libspdk_accel.so 00:12:11.565 SO libspdk_init.so.5.0 00:12:11.565 LIB libspdk_virtio.a 00:12:11.565 SO libspdk_virtio.so.7.0 00:12:11.565 SYMLINK libspdk_init.so 00:12:11.565 SYMLINK libspdk_virtio.so 00:12:11.822 CC lib/bdev/bdev_rpc.o 00:12:11.822 CC lib/bdev/bdev.o 00:12:11.822 CC lib/bdev/bdev_zone.o 00:12:11.822 CC lib/bdev/part.o 00:12:11.822 CC lib/bdev/scsi_nvme.o 00:12:11.822 CC lib/event/reactor.o 00:12:11.822 CC lib/event/app.o 00:12:11.822 CC lib/event/log_rpc.o 00:12:11.822 CC lib/event/app_rpc.o 00:12:12.080 CC lib/event/scheduler_static.o 00:12:12.080 LIB libspdk_nvme.a 00:12:12.338 SO libspdk_nvme.so.13.0 00:12:12.338 LIB 
libspdk_event.a 00:12:12.338 SO libspdk_event.so.13.0 00:12:12.595 SYMLINK libspdk_event.so 00:12:12.595 SYMLINK libspdk_nvme.so 00:12:14.500 LIB libspdk_blob.a 00:12:14.500 SO libspdk_blob.so.11.0 00:12:14.500 SYMLINK libspdk_blob.so 00:12:14.758 CC lib/blobfs/blobfs.o 00:12:14.758 CC lib/blobfs/tree.o 00:12:14.758 CC lib/lvol/lvol.o 00:12:15.016 LIB libspdk_bdev.a 00:12:15.337 SO libspdk_bdev.so.15.0 00:12:15.338 SYMLINK libspdk_bdev.so 00:12:15.596 CC lib/ublk/ublk.o 00:12:15.596 CC lib/ublk/ublk_rpc.o 00:12:15.596 CC lib/ftl/ftl_init.o 00:12:15.596 CC lib/ftl/ftl_layout.o 00:12:15.596 CC lib/nvmf/ctrlr.o 00:12:15.596 CC lib/ftl/ftl_core.o 00:12:15.596 CC lib/scsi/dev.o 00:12:15.596 CC lib/nbd/nbd.o 00:12:15.853 LIB libspdk_blobfs.a 00:12:15.853 CC lib/ftl/ftl_debug.o 00:12:15.853 SO libspdk_blobfs.so.10.0 00:12:15.853 CC lib/nbd/nbd_rpc.o 00:12:15.853 CC lib/scsi/lun.o 00:12:15.853 SYMLINK libspdk_blobfs.so 00:12:15.853 CC lib/nvmf/ctrlr_discovery.o 00:12:16.111 LIB libspdk_lvol.a 00:12:16.111 CC lib/ftl/ftl_io.o 00:12:16.111 SO libspdk_lvol.so.10.0 00:12:16.111 CC lib/scsi/port.o 00:12:16.111 CC lib/scsi/scsi.o 00:12:16.111 CC lib/scsi/scsi_bdev.o 00:12:16.111 SYMLINK libspdk_lvol.so 00:12:16.111 CC lib/scsi/scsi_pr.o 00:12:16.111 LIB libspdk_nbd.a 00:12:16.111 SO libspdk_nbd.so.7.0 00:12:16.368 CC lib/scsi/scsi_rpc.o 00:12:16.368 CC lib/ftl/ftl_sb.o 00:12:16.368 CC lib/scsi/task.o 00:12:16.368 SYMLINK libspdk_nbd.so 00:12:16.368 CC lib/nvmf/ctrlr_bdev.o 00:12:16.368 CC lib/nvmf/subsystem.o 00:12:16.368 CC lib/nvmf/nvmf.o 00:12:16.368 LIB libspdk_ublk.a 00:12:16.368 SO libspdk_ublk.so.3.0 00:12:16.626 CC lib/ftl/ftl_l2p.o 00:12:16.626 CC lib/nvmf/nvmf_rpc.o 00:12:16.626 CC lib/nvmf/transport.o 00:12:16.626 SYMLINK libspdk_ublk.so 00:12:16.626 CC lib/ftl/ftl_l2p_flat.o 00:12:16.626 CC lib/ftl/ftl_nv_cache.o 00:12:16.626 LIB libspdk_scsi.a 00:12:16.626 CC lib/ftl/ftl_band.o 00:12:16.884 SO libspdk_scsi.so.9.0 00:12:16.884 CC lib/ftl/ftl_band_ops.o 00:12:16.884 SYMLINK libspdk_scsi.so 00:12:16.884 CC lib/ftl/ftl_writer.o 00:12:17.141 CC lib/nvmf/tcp.o 00:12:17.141 CC lib/ftl/ftl_rq.o 00:12:17.399 CC lib/iscsi/conn.o 00:12:17.399 CC lib/vhost/vhost.o 00:12:17.399 CC lib/vhost/vhost_rpc.o 00:12:17.399 CC lib/iscsi/init_grp.o 00:12:17.658 CC lib/ftl/ftl_reloc.o 00:12:17.658 CC lib/nvmf/rdma.o 00:12:17.658 CC lib/vhost/vhost_scsi.o 00:12:17.658 CC lib/vhost/vhost_blk.o 00:12:17.916 CC lib/vhost/rte_vhost_user.o 00:12:17.916 CC lib/ftl/ftl_l2p_cache.o 00:12:17.916 CC lib/ftl/ftl_p2l.o 00:12:18.174 CC lib/ftl/mngt/ftl_mngt.o 00:12:18.174 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:12:18.174 CC lib/iscsi/iscsi.o 00:12:18.431 CC lib/iscsi/md5.o 00:12:18.431 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:12:18.431 CC lib/ftl/mngt/ftl_mngt_startup.o 00:12:18.688 CC lib/iscsi/param.o 00:12:18.688 CC lib/iscsi/portal_grp.o 00:12:18.688 CC lib/iscsi/tgt_node.o 00:12:18.688 CC lib/ftl/mngt/ftl_mngt_md.o 00:12:18.688 CC lib/ftl/mngt/ftl_mngt_misc.o 00:12:18.945 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:12:18.945 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:12:18.945 CC lib/iscsi/iscsi_subsystem.o 00:12:18.945 CC lib/ftl/mngt/ftl_mngt_band.o 00:12:18.945 LIB libspdk_vhost.a 00:12:19.203 CC lib/iscsi/iscsi_rpc.o 00:12:19.203 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:12:19.203 CC lib/iscsi/task.o 00:12:19.203 SO libspdk_vhost.so.8.0 00:12:19.203 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:12:19.203 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:12:19.203 SYMLINK libspdk_vhost.so 00:12:19.203 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:12:19.461 CC 
lib/ftl/utils/ftl_conf.o 00:12:19.461 CC lib/ftl/utils/ftl_md.o 00:12:19.461 CC lib/ftl/utils/ftl_mempool.o 00:12:19.461 CC lib/ftl/utils/ftl_bitmap.o 00:12:19.461 CC lib/ftl/utils/ftl_property.o 00:12:19.461 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:12:19.461 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:12:19.719 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:12:19.719 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:12:19.719 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:12:19.719 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:12:19.719 CC lib/ftl/upgrade/ftl_sb_v3.o 00:12:19.719 CC lib/ftl/upgrade/ftl_sb_v5.o 00:12:19.719 CC lib/ftl/nvc/ftl_nvc_dev.o 00:12:19.719 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:12:19.978 CC lib/ftl/base/ftl_base_dev.o 00:12:19.978 CC lib/ftl/base/ftl_base_bdev.o 00:12:19.978 CC lib/ftl/ftl_trace.o 00:12:19.978 LIB libspdk_iscsi.a 00:12:20.236 SO libspdk_iscsi.so.8.0 00:12:20.236 LIB libspdk_ftl.a 00:12:20.494 SYMLINK libspdk_iscsi.so 00:12:20.494 LIB libspdk_nvmf.a 00:12:20.494 SO libspdk_ftl.so.9.0 00:12:20.494 SO libspdk_nvmf.so.18.0 00:12:20.752 SYMLINK libspdk_nvmf.so 00:12:21.010 SYMLINK libspdk_ftl.so 00:12:21.269 CC module/env_dpdk/env_dpdk_rpc.o 00:12:21.269 CC module/accel/dsa/accel_dsa.o 00:12:21.269 CC module/blob/bdev/blob_bdev.o 00:12:21.269 CC module/keyring/file/keyring.o 00:12:21.269 CC module/accel/ioat/accel_ioat.o 00:12:21.269 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:12:21.269 CC module/accel/error/accel_error.o 00:12:21.269 CC module/accel/iaa/accel_iaa.o 00:12:21.269 CC module/sock/posix/posix.o 00:12:21.269 CC module/scheduler/dynamic/scheduler_dynamic.o 00:12:21.527 LIB libspdk_env_dpdk_rpc.a 00:12:21.527 SO libspdk_env_dpdk_rpc.so.6.0 00:12:21.527 LIB libspdk_scheduler_dpdk_governor.a 00:12:21.527 CC module/keyring/file/keyring_rpc.o 00:12:21.527 SO libspdk_scheduler_dpdk_governor.so.4.0 00:12:21.527 SYMLINK libspdk_env_dpdk_rpc.so 00:12:21.527 CC module/accel/dsa/accel_dsa_rpc.o 00:12:21.527 CC module/accel/error/accel_error_rpc.o 00:12:21.527 CC module/accel/ioat/accel_ioat_rpc.o 00:12:21.527 CC module/accel/iaa/accel_iaa_rpc.o 00:12:21.527 LIB libspdk_scheduler_dynamic.a 00:12:21.527 SYMLINK libspdk_scheduler_dpdk_governor.so 00:12:21.527 SO libspdk_scheduler_dynamic.so.4.0 00:12:21.527 LIB libspdk_blob_bdev.a 00:12:21.527 LIB libspdk_keyring_file.a 00:12:21.785 SO libspdk_blob_bdev.so.11.0 00:12:21.785 SYMLINK libspdk_scheduler_dynamic.so 00:12:21.785 SO libspdk_keyring_file.so.1.0 00:12:21.785 LIB libspdk_accel_error.a 00:12:21.785 LIB libspdk_accel_ioat.a 00:12:21.785 LIB libspdk_accel_dsa.a 00:12:21.785 LIB libspdk_accel_iaa.a 00:12:21.785 SYMLINK libspdk_blob_bdev.so 00:12:21.785 SO libspdk_accel_ioat.so.6.0 00:12:21.785 SO libspdk_accel_error.so.2.0 00:12:21.785 SO libspdk_accel_dsa.so.5.0 00:12:21.785 SO libspdk_accel_iaa.so.3.0 00:12:21.785 SYMLINK libspdk_keyring_file.so 00:12:21.785 CC module/scheduler/gscheduler/gscheduler.o 00:12:21.785 SYMLINK libspdk_accel_ioat.so 00:12:21.785 SYMLINK libspdk_accel_error.so 00:12:21.785 SYMLINK libspdk_accel_dsa.so 00:12:21.785 SYMLINK libspdk_accel_iaa.so 00:12:22.043 LIB libspdk_scheduler_gscheduler.a 00:12:22.043 SO libspdk_scheduler_gscheduler.so.4.0 00:12:22.043 CC module/bdev/delay/vbdev_delay.o 00:12:22.043 CC module/bdev/malloc/bdev_malloc.o 00:12:22.043 CC module/bdev/null/bdev_null.o 00:12:22.043 CC module/bdev/lvol/vbdev_lvol.o 00:12:22.043 CC module/bdev/gpt/gpt.o 00:12:22.043 CC module/blobfs/bdev/blobfs_bdev.o 00:12:22.043 CC module/bdev/nvme/bdev_nvme.o 00:12:22.043 CC 
module/bdev/error/vbdev_error.o 00:12:22.043 SYMLINK libspdk_scheduler_gscheduler.so 00:12:22.043 CC module/bdev/error/vbdev_error_rpc.o 00:12:22.301 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:12:22.301 CC module/bdev/gpt/vbdev_gpt.o 00:12:22.301 CC module/bdev/null/bdev_null_rpc.o 00:12:22.301 LIB libspdk_sock_posix.a 00:12:22.301 SO libspdk_sock_posix.so.6.0 00:12:22.301 CC module/bdev/malloc/bdev_malloc_rpc.o 00:12:22.301 LIB libspdk_bdev_error.a 00:12:22.301 SYMLINK libspdk_sock_posix.so 00:12:22.301 SO libspdk_bdev_error.so.6.0 00:12:22.301 LIB libspdk_blobfs_bdev.a 00:12:22.559 CC module/bdev/delay/vbdev_delay_rpc.o 00:12:22.559 LIB libspdk_bdev_null.a 00:12:22.559 SO libspdk_blobfs_bdev.so.6.0 00:12:22.559 SO libspdk_bdev_null.so.6.0 00:12:22.559 SYMLINK libspdk_bdev_error.so 00:12:22.559 LIB libspdk_bdev_malloc.a 00:12:22.559 SYMLINK libspdk_bdev_null.so 00:12:22.559 SO libspdk_bdev_malloc.so.6.0 00:12:22.559 SYMLINK libspdk_blobfs_bdev.so 00:12:22.559 LIB libspdk_bdev_gpt.a 00:12:22.559 SO libspdk_bdev_gpt.so.6.0 00:12:22.559 CC module/bdev/passthru/vbdev_passthru.o 00:12:22.559 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:12:22.559 SYMLINK libspdk_bdev_malloc.so 00:12:22.559 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:12:22.559 LIB libspdk_bdev_delay.a 00:12:22.559 CC module/bdev/raid/bdev_raid.o 00:12:22.817 SO libspdk_bdev_delay.so.6.0 00:12:22.817 SYMLINK libspdk_bdev_gpt.so 00:12:22.817 CC module/bdev/split/vbdev_split.o 00:12:22.817 CC module/bdev/aio/bdev_aio.o 00:12:22.817 SYMLINK libspdk_bdev_delay.so 00:12:22.817 CC module/bdev/zone_block/vbdev_zone_block.o 00:12:22.817 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:12:22.817 CC module/bdev/ftl/bdev_ftl.o 00:12:23.076 CC module/bdev/iscsi/bdev_iscsi.o 00:12:23.076 CC module/bdev/split/vbdev_split_rpc.o 00:12:23.076 LIB libspdk_bdev_passthru.a 00:12:23.076 LIB libspdk_bdev_lvol.a 00:12:23.076 SO libspdk_bdev_passthru.so.6.0 00:12:23.076 SO libspdk_bdev_lvol.so.6.0 00:12:23.076 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:12:23.076 SYMLINK libspdk_bdev_passthru.so 00:12:23.076 CC module/bdev/nvme/bdev_nvme_rpc.o 00:12:23.076 SYMLINK libspdk_bdev_lvol.so 00:12:23.076 CC module/bdev/nvme/nvme_rpc.o 00:12:23.076 LIB libspdk_bdev_split.a 00:12:23.334 LIB libspdk_bdev_zone_block.a 00:12:23.334 CC module/bdev/aio/bdev_aio_rpc.o 00:12:23.334 SO libspdk_bdev_split.so.6.0 00:12:23.334 SO libspdk_bdev_zone_block.so.6.0 00:12:23.334 CC module/bdev/ftl/bdev_ftl_rpc.o 00:12:23.334 SYMLINK libspdk_bdev_split.so 00:12:23.334 SYMLINK libspdk_bdev_zone_block.so 00:12:23.334 CC module/bdev/raid/bdev_raid_rpc.o 00:12:23.334 CC module/bdev/raid/bdev_raid_sb.o 00:12:23.334 LIB libspdk_bdev_aio.a 00:12:23.334 LIB libspdk_bdev_iscsi.a 00:12:23.334 SO libspdk_bdev_aio.so.6.0 00:12:23.334 SO libspdk_bdev_iscsi.so.6.0 00:12:23.592 CC module/bdev/nvme/bdev_mdns_client.o 00:12:23.592 CC module/bdev/virtio/bdev_virtio_scsi.o 00:12:23.592 SYMLINK libspdk_bdev_aio.so 00:12:23.592 CC module/bdev/nvme/vbdev_opal.o 00:12:23.592 SYMLINK libspdk_bdev_iscsi.so 00:12:23.592 LIB libspdk_bdev_ftl.a 00:12:23.592 CC module/bdev/nvme/vbdev_opal_rpc.o 00:12:23.592 SO libspdk_bdev_ftl.so.6.0 00:12:23.592 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:12:23.592 CC module/bdev/virtio/bdev_virtio_blk.o 00:12:23.592 SYMLINK libspdk_bdev_ftl.so 00:12:23.592 CC module/bdev/virtio/bdev_virtio_rpc.o 00:12:23.849 CC module/bdev/raid/raid0.o 00:12:23.849 CC module/bdev/raid/raid1.o 00:12:23.849 CC module/bdev/raid/concat.o 00:12:24.107 LIB libspdk_bdev_raid.a 00:12:24.107 SO 
libspdk_bdev_raid.so.6.0 00:12:24.107 LIB libspdk_bdev_virtio.a 00:12:24.107 SO libspdk_bdev_virtio.so.6.0 00:12:24.365 SYMLINK libspdk_bdev_raid.so 00:12:24.365 SYMLINK libspdk_bdev_virtio.so 00:12:24.932 LIB libspdk_bdev_nvme.a 00:12:24.932 SO libspdk_bdev_nvme.so.7.0 00:12:25.189 SYMLINK libspdk_bdev_nvme.so 00:12:25.755 CC module/event/subsystems/iobuf/iobuf.o 00:12:25.755 CC module/event/subsystems/scheduler/scheduler.o 00:12:25.755 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:12:25.755 CC module/event/subsystems/vmd/vmd.o 00:12:25.755 CC module/event/subsystems/sock/sock.o 00:12:25.755 CC module/event/subsystems/vmd/vmd_rpc.o 00:12:25.755 CC module/event/subsystems/keyring/keyring.o 00:12:25.755 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:12:25.755 LIB libspdk_event_sock.a 00:12:25.755 LIB libspdk_event_vhost_blk.a 00:12:25.755 LIB libspdk_event_keyring.a 00:12:25.755 LIB libspdk_event_vmd.a 00:12:25.755 LIB libspdk_event_scheduler.a 00:12:26.013 LIB libspdk_event_iobuf.a 00:12:26.013 SO libspdk_event_sock.so.5.0 00:12:26.013 SO libspdk_event_scheduler.so.4.0 00:12:26.013 SO libspdk_event_vhost_blk.so.3.0 00:12:26.013 SO libspdk_event_keyring.so.1.0 00:12:26.013 SO libspdk_event_vmd.so.6.0 00:12:26.013 SO libspdk_event_iobuf.so.3.0 00:12:26.013 SYMLINK libspdk_event_keyring.so 00:12:26.013 SYMLINK libspdk_event_sock.so 00:12:26.013 SYMLINK libspdk_event_vhost_blk.so 00:12:26.013 SYMLINK libspdk_event_scheduler.so 00:12:26.013 SYMLINK libspdk_event_vmd.so 00:12:26.013 SYMLINK libspdk_event_iobuf.so 00:12:26.271 CC module/event/subsystems/accel/accel.o 00:12:26.528 LIB libspdk_event_accel.a 00:12:26.528 SO libspdk_event_accel.so.6.0 00:12:26.528 SYMLINK libspdk_event_accel.so 00:12:26.787 CC module/event/subsystems/bdev/bdev.o 00:12:27.045 LIB libspdk_event_bdev.a 00:12:27.045 SO libspdk_event_bdev.so.6.0 00:12:27.304 SYMLINK libspdk_event_bdev.so 00:12:27.304 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:12:27.304 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:12:27.304 CC module/event/subsystems/nbd/nbd.o 00:12:27.304 CC module/event/subsystems/ublk/ublk.o 00:12:27.563 CC module/event/subsystems/scsi/scsi.o 00:12:27.563 LIB libspdk_event_ublk.a 00:12:27.563 LIB libspdk_event_nbd.a 00:12:27.563 LIB libspdk_event_scsi.a 00:12:27.563 SO libspdk_event_ublk.so.3.0 00:12:27.563 SO libspdk_event_nbd.so.6.0 00:12:27.563 SO libspdk_event_scsi.so.6.0 00:12:27.821 SYMLINK libspdk_event_ublk.so 00:12:27.821 SYMLINK libspdk_event_nbd.so 00:12:27.821 LIB libspdk_event_nvmf.a 00:12:27.821 SYMLINK libspdk_event_scsi.so 00:12:27.821 SO libspdk_event_nvmf.so.6.0 00:12:27.821 SYMLINK libspdk_event_nvmf.so 00:12:28.080 CC module/event/subsystems/iscsi/iscsi.o 00:12:28.080 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:12:28.080 LIB libspdk_event_vhost_scsi.a 00:12:28.080 LIB libspdk_event_iscsi.a 00:12:28.080 SO libspdk_event_vhost_scsi.so.3.0 00:12:28.338 SO libspdk_event_iscsi.so.6.0 00:12:28.338 SYMLINK libspdk_event_vhost_scsi.so 00:12:28.338 SYMLINK libspdk_event_iscsi.so 00:12:28.338 SO libspdk.so.6.0 00:12:28.338 SYMLINK libspdk.so 00:12:28.597 TEST_HEADER include/spdk/accel.h 00:12:28.597 TEST_HEADER include/spdk/accel_module.h 00:12:28.597 TEST_HEADER include/spdk/assert.h 00:12:28.597 CXX app/trace/trace.o 00:12:28.597 TEST_HEADER include/spdk/barrier.h 00:12:28.597 TEST_HEADER include/spdk/base64.h 00:12:28.597 TEST_HEADER include/spdk/bdev.h 00:12:28.597 TEST_HEADER include/spdk/bdev_module.h 00:12:28.597 TEST_HEADER include/spdk/bdev_zone.h 00:12:28.884 TEST_HEADER 
include/spdk/bit_array.h 00:12:28.884 TEST_HEADER include/spdk/bit_pool.h 00:12:28.884 TEST_HEADER include/spdk/blob_bdev.h 00:12:28.884 CC app/trace_record/trace_record.o 00:12:28.884 TEST_HEADER include/spdk/blobfs_bdev.h 00:12:28.884 TEST_HEADER include/spdk/blobfs.h 00:12:28.884 TEST_HEADER include/spdk/blob.h 00:12:28.884 TEST_HEADER include/spdk/conf.h 00:12:28.884 TEST_HEADER include/spdk/config.h 00:12:28.884 TEST_HEADER include/spdk/cpuset.h 00:12:28.884 TEST_HEADER include/spdk/crc16.h 00:12:28.884 TEST_HEADER include/spdk/crc32.h 00:12:28.884 TEST_HEADER include/spdk/crc64.h 00:12:28.884 TEST_HEADER include/spdk/dif.h 00:12:28.884 TEST_HEADER include/spdk/dma.h 00:12:28.884 TEST_HEADER include/spdk/endian.h 00:12:28.884 TEST_HEADER include/spdk/env_dpdk.h 00:12:28.884 TEST_HEADER include/spdk/env.h 00:12:28.884 TEST_HEADER include/spdk/event.h 00:12:28.884 CC app/nvmf_tgt/nvmf_main.o 00:12:28.884 TEST_HEADER include/spdk/fd_group.h 00:12:28.884 TEST_HEADER include/spdk/fd.h 00:12:28.884 TEST_HEADER include/spdk/file.h 00:12:28.884 TEST_HEADER include/spdk/ftl.h 00:12:28.884 TEST_HEADER include/spdk/gpt_spec.h 00:12:28.884 TEST_HEADER include/spdk/hexlify.h 00:12:28.884 TEST_HEADER include/spdk/histogram_data.h 00:12:28.884 TEST_HEADER include/spdk/idxd.h 00:12:28.884 TEST_HEADER include/spdk/idxd_spec.h 00:12:28.884 TEST_HEADER include/spdk/init.h 00:12:28.884 TEST_HEADER include/spdk/ioat.h 00:12:28.884 TEST_HEADER include/spdk/ioat_spec.h 00:12:28.884 TEST_HEADER include/spdk/iscsi_spec.h 00:12:28.884 TEST_HEADER include/spdk/json.h 00:12:28.884 TEST_HEADER include/spdk/jsonrpc.h 00:12:28.884 TEST_HEADER include/spdk/keyring.h 00:12:28.884 TEST_HEADER include/spdk/keyring_module.h 00:12:28.884 TEST_HEADER include/spdk/likely.h 00:12:28.884 TEST_HEADER include/spdk/log.h 00:12:28.884 TEST_HEADER include/spdk/lvol.h 00:12:28.884 TEST_HEADER include/spdk/memory.h 00:12:28.884 TEST_HEADER include/spdk/mmio.h 00:12:28.884 CC examples/accel/perf/accel_perf.o 00:12:28.884 TEST_HEADER include/spdk/nbd.h 00:12:28.884 TEST_HEADER include/spdk/notify.h 00:12:28.884 TEST_HEADER include/spdk/nvme.h 00:12:28.884 TEST_HEADER include/spdk/nvme_intel.h 00:12:28.884 TEST_HEADER include/spdk/nvme_ocssd.h 00:12:28.884 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:12:28.884 TEST_HEADER include/spdk/nvme_spec.h 00:12:28.884 TEST_HEADER include/spdk/nvme_zns.h 00:12:28.884 TEST_HEADER include/spdk/nvmf_cmd.h 00:12:28.884 CC test/bdev/bdevio/bdevio.o 00:12:28.884 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:12:28.884 CC test/blobfs/mkfs/mkfs.o 00:12:28.884 TEST_HEADER include/spdk/nvmf.h 00:12:28.884 TEST_HEADER include/spdk/nvmf_spec.h 00:12:28.884 TEST_HEADER include/spdk/nvmf_transport.h 00:12:28.884 TEST_HEADER include/spdk/opal.h 00:12:28.884 TEST_HEADER include/spdk/opal_spec.h 00:12:28.884 TEST_HEADER include/spdk/pci_ids.h 00:12:28.884 CC test/accel/dif/dif.o 00:12:28.884 TEST_HEADER include/spdk/pipe.h 00:12:28.884 TEST_HEADER include/spdk/queue.h 00:12:28.884 TEST_HEADER include/spdk/reduce.h 00:12:28.884 TEST_HEADER include/spdk/rpc.h 00:12:28.884 TEST_HEADER include/spdk/scheduler.h 00:12:28.884 TEST_HEADER include/spdk/scsi.h 00:12:28.884 CC test/app/bdev_svc/bdev_svc.o 00:12:28.884 TEST_HEADER include/spdk/scsi_spec.h 00:12:28.884 TEST_HEADER include/spdk/sock.h 00:12:28.884 TEST_HEADER include/spdk/stdinc.h 00:12:28.884 CC test/dma/test_dma/test_dma.o 00:12:28.884 TEST_HEADER include/spdk/string.h 00:12:28.884 TEST_HEADER include/spdk/thread.h 00:12:28.884 TEST_HEADER 
include/spdk/trace.h 00:12:28.884 TEST_HEADER include/spdk/trace_parser.h 00:12:28.884 TEST_HEADER include/spdk/tree.h 00:12:28.884 TEST_HEADER include/spdk/ublk.h 00:12:28.884 TEST_HEADER include/spdk/util.h 00:12:28.884 TEST_HEADER include/spdk/uuid.h 00:12:28.884 TEST_HEADER include/spdk/version.h 00:12:28.884 TEST_HEADER include/spdk/vfio_user_pci.h 00:12:28.884 TEST_HEADER include/spdk/vfio_user_spec.h 00:12:28.884 TEST_HEADER include/spdk/vhost.h 00:12:28.884 TEST_HEADER include/spdk/vmd.h 00:12:28.884 TEST_HEADER include/spdk/xor.h 00:12:28.884 TEST_HEADER include/spdk/zipf.h 00:12:28.884 CXX test/cpp_headers/accel.o 00:12:29.142 LINK nvmf_tgt 00:12:29.142 LINK spdk_trace_record 00:12:29.142 LINK mkfs 00:12:29.142 LINK bdev_svc 00:12:29.142 CXX test/cpp_headers/accel_module.o 00:12:29.142 LINK spdk_trace 00:12:29.401 LINK dif 00:12:29.401 LINK bdevio 00:12:29.401 CXX test/cpp_headers/assert.o 00:12:29.401 LINK accel_perf 00:12:29.401 LINK test_dma 00:12:29.660 CC test/event/event_perf/event_perf.o 00:12:29.660 CXX test/cpp_headers/barrier.o 00:12:29.660 CC app/iscsi_tgt/iscsi_tgt.o 00:12:29.660 CC test/env/mem_callbacks/mem_callbacks.o 00:12:29.660 CC test/event/reactor/reactor.o 00:12:29.660 LINK event_perf 00:12:29.660 CC test/event/reactor_perf/reactor_perf.o 00:12:29.660 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:12:29.660 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:12:29.660 CXX test/cpp_headers/base64.o 00:12:29.918 LINK reactor 00:12:29.918 LINK iscsi_tgt 00:12:29.918 CC test/app/histogram_perf/histogram_perf.o 00:12:29.918 LINK reactor_perf 00:12:29.918 CC examples/bdev/hello_world/hello_bdev.o 00:12:29.918 CXX test/cpp_headers/bdev.o 00:12:29.918 LINK histogram_perf 00:12:30.176 CC examples/blob/hello_world/hello_blob.o 00:12:30.176 CC app/spdk_tgt/spdk_tgt.o 00:12:30.176 CC test/event/app_repeat/app_repeat.o 00:12:30.176 LINK hello_bdev 00:12:30.176 CXX test/cpp_headers/bdev_module.o 00:12:30.177 LINK mem_callbacks 00:12:30.177 LINK nvme_fuzz 00:12:30.177 CC examples/ioat/perf/perf.o 00:12:30.177 CC app/spdk_lspci/spdk_lspci.o 00:12:30.434 LINK hello_blob 00:12:30.434 LINK app_repeat 00:12:30.434 LINK spdk_tgt 00:12:30.434 CXX test/cpp_headers/bdev_zone.o 00:12:30.435 LINK spdk_lspci 00:12:30.435 CC test/env/vtophys/vtophys.o 00:12:30.435 LINK ioat_perf 00:12:30.435 CC app/spdk_nvme_perf/perf.o 00:12:30.435 CC examples/bdev/bdevperf/bdevperf.o 00:12:30.693 CXX test/cpp_headers/bit_array.o 00:12:30.693 LINK vtophys 00:12:30.693 CC test/event/scheduler/scheduler.o 00:12:30.693 CC examples/ioat/verify/verify.o 00:12:30.693 CC examples/blob/cli/blobcli.o 00:12:30.693 CC app/spdk_nvme_identify/identify.o 00:12:30.951 CXX test/cpp_headers/bit_pool.o 00:12:30.951 CC examples/nvme/hello_world/hello_world.o 00:12:30.951 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:12:30.951 LINK verify 00:12:30.951 CXX test/cpp_headers/blob_bdev.o 00:12:30.951 LINK scheduler 00:12:31.208 LINK env_dpdk_post_init 00:12:31.208 LINK hello_world 00:12:31.208 CC examples/nvme/reconnect/reconnect.o 00:12:31.208 CXX test/cpp_headers/blobfs_bdev.o 00:12:31.466 LINK blobcli 00:12:31.466 CC examples/nvme/nvme_manage/nvme_manage.o 00:12:31.466 CC test/env/memory/memory_ut.o 00:12:31.466 CXX test/cpp_headers/blobfs.o 00:12:31.466 CC examples/sock/hello_world/hello_sock.o 00:12:31.466 LINK bdevperf 00:12:31.724 CXX test/cpp_headers/blob.o 00:12:31.724 LINK reconnect 00:12:31.724 LINK spdk_nvme_perf 00:12:31.724 CC app/spdk_nvme_discover/discovery_aer.o 00:12:31.724 LINK spdk_nvme_identify 00:12:31.724 
CXX test/cpp_headers/conf.o 00:12:31.724 LINK hello_sock 00:12:31.982 CC examples/nvme/arbitration/arbitration.o 00:12:31.982 LINK spdk_nvme_discover 00:12:31.982 CC app/spdk_top/spdk_top.o 00:12:31.982 LINK iscsi_fuzz 00:12:31.982 CC app/vhost/vhost.o 00:12:31.982 CXX test/cpp_headers/config.o 00:12:31.982 CXX test/cpp_headers/cpuset.o 00:12:31.982 LINK nvme_manage 00:12:31.982 CC examples/nvme/hotplug/hotplug.o 00:12:32.241 CC examples/nvme/cmb_copy/cmb_copy.o 00:12:32.241 CC examples/nvme/abort/abort.o 00:12:32.241 LINK vhost 00:12:32.241 CXX test/cpp_headers/crc16.o 00:12:32.241 LINK arbitration 00:12:32.241 LINK cmb_copy 00:12:32.499 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:12:32.499 LINK hotplug 00:12:32.499 CXX test/cpp_headers/crc32.o 00:12:32.499 LINK memory_ut 00:12:32.499 CC test/lvol/esnap/esnap.o 00:12:32.499 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:12:32.499 CC app/spdk_dd/spdk_dd.o 00:12:32.499 CXX test/cpp_headers/crc64.o 00:12:32.756 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:12:32.756 CC test/env/pci/pci_ut.o 00:12:32.756 CC test/nvme/aer/aer.o 00:12:32.756 LINK abort 00:12:32.756 CXX test/cpp_headers/dif.o 00:12:32.756 LINK pmr_persistence 00:12:33.014 CC app/fio/nvme/fio_plugin.o 00:12:33.014 CXX test/cpp_headers/dma.o 00:12:33.014 LINK spdk_dd 00:12:33.014 LINK spdk_top 00:12:33.014 LINK vhost_fuzz 00:12:33.014 LINK aer 00:12:33.014 CC app/fio/bdev/fio_plugin.o 00:12:33.273 LINK pci_ut 00:12:33.273 CXX test/cpp_headers/endian.o 00:12:33.273 CC examples/vmd/lsvmd/lsvmd.o 00:12:33.273 CC test/app/jsoncat/jsoncat.o 00:12:33.273 CC test/nvme/reset/reset.o 00:12:33.273 CXX test/cpp_headers/env_dpdk.o 00:12:33.273 LINK lsvmd 00:12:33.531 CC examples/util/zipf/zipf.o 00:12:33.531 LINK jsoncat 00:12:33.531 CC examples/nvmf/nvmf/nvmf.o 00:12:33.531 CXX test/cpp_headers/env.o 00:12:33.531 LINK spdk_nvme 00:12:33.531 LINK zipf 00:12:33.531 CC examples/vmd/led/led.o 00:12:33.790 LINK reset 00:12:33.790 LINK spdk_bdev 00:12:33.790 CC examples/thread/thread/thread_ex.o 00:12:33.790 CC test/app/stub/stub.o 00:12:33.790 CXX test/cpp_headers/event.o 00:12:33.790 LINK led 00:12:33.790 LINK nvmf 00:12:33.790 CC test/nvme/sgl/sgl.o 00:12:34.048 CC test/nvme/e2edp/nvme_dp.o 00:12:34.048 CC examples/idxd/perf/perf.o 00:12:34.048 LINK stub 00:12:34.048 CXX test/cpp_headers/fd_group.o 00:12:34.048 LINK thread 00:12:34.048 CXX test/cpp_headers/fd.o 00:12:34.307 CXX test/cpp_headers/file.o 00:12:34.307 CC test/rpc_client/rpc_client_test.o 00:12:34.307 CC examples/interrupt_tgt/interrupt_tgt.o 00:12:34.307 LINK sgl 00:12:34.307 CC test/nvme/overhead/overhead.o 00:12:34.307 LINK nvme_dp 00:12:34.307 CXX test/cpp_headers/ftl.o 00:12:34.307 CC test/thread/poller_perf/poller_perf.o 00:12:34.564 LINK idxd_perf 00:12:34.564 LINK interrupt_tgt 00:12:34.564 LINK rpc_client_test 00:12:34.564 CC test/nvme/err_injection/err_injection.o 00:12:34.564 CXX test/cpp_headers/gpt_spec.o 00:12:34.564 CC test/nvme/startup/startup.o 00:12:34.564 LINK poller_perf 00:12:34.564 LINK overhead 00:12:34.822 CC test/nvme/reserve/reserve.o 00:12:34.822 CC test/nvme/simple_copy/simple_copy.o 00:12:34.822 CXX test/cpp_headers/hexlify.o 00:12:34.822 LINK startup 00:12:34.822 LINK err_injection 00:12:34.822 CC test/nvme/connect_stress/connect_stress.o 00:12:34.822 CXX test/cpp_headers/histogram_data.o 00:12:34.822 CC test/nvme/boot_partition/boot_partition.o 00:12:35.079 LINK reserve 00:12:35.079 CXX test/cpp_headers/idxd.o 00:12:35.079 LINK connect_stress 00:12:35.079 LINK simple_copy 00:12:35.079 CC 
test/nvme/compliance/nvme_compliance.o 00:12:35.079 CC test/nvme/fused_ordering/fused_ordering.o 00:12:35.079 CC test/nvme/doorbell_aers/doorbell_aers.o 00:12:35.079 LINK boot_partition 00:12:35.337 CXX test/cpp_headers/idxd_spec.o 00:12:35.337 LINK doorbell_aers 00:12:35.337 CXX test/cpp_headers/init.o 00:12:35.337 CC test/nvme/fdp/fdp.o 00:12:35.337 CXX test/cpp_headers/ioat.o 00:12:35.596 LINK fused_ordering 00:12:35.596 CC test/nvme/cuse/cuse.o 00:12:35.596 CXX test/cpp_headers/ioat_spec.o 00:12:35.596 CXX test/cpp_headers/iscsi_spec.o 00:12:35.596 CXX test/cpp_headers/json.o 00:12:35.596 LINK nvme_compliance 00:12:35.596 CXX test/cpp_headers/jsonrpc.o 00:12:35.596 CXX test/cpp_headers/keyring.o 00:12:35.854 LINK fdp 00:12:35.854 CXX test/cpp_headers/keyring_module.o 00:12:35.854 CXX test/cpp_headers/likely.o 00:12:35.854 CXX test/cpp_headers/log.o 00:12:35.854 CXX test/cpp_headers/lvol.o 00:12:35.855 CXX test/cpp_headers/memory.o 00:12:35.855 CXX test/cpp_headers/mmio.o 00:12:36.113 CXX test/cpp_headers/nbd.o 00:12:36.113 CXX test/cpp_headers/notify.o 00:12:36.113 CXX test/cpp_headers/nvme.o 00:12:36.113 CXX test/cpp_headers/nvme_intel.o 00:12:36.113 CXX test/cpp_headers/nvme_ocssd.o 00:12:36.113 CXX test/cpp_headers/nvme_ocssd_spec.o 00:12:36.113 CXX test/cpp_headers/nvme_spec.o 00:12:36.113 CXX test/cpp_headers/nvme_zns.o 00:12:36.113 CXX test/cpp_headers/nvmf_cmd.o 00:12:36.113 CXX test/cpp_headers/nvmf_fc_spec.o 00:12:36.113 CXX test/cpp_headers/nvmf.o 00:12:36.371 CXX test/cpp_headers/nvmf_spec.o 00:12:36.371 CXX test/cpp_headers/nvmf_transport.o 00:12:36.371 CXX test/cpp_headers/opal.o 00:12:36.371 CXX test/cpp_headers/opal_spec.o 00:12:36.371 CXX test/cpp_headers/pci_ids.o 00:12:36.371 CXX test/cpp_headers/pipe.o 00:12:36.371 CXX test/cpp_headers/queue.o 00:12:36.629 CXX test/cpp_headers/reduce.o 00:12:36.629 CXX test/cpp_headers/rpc.o 00:12:36.629 CXX test/cpp_headers/scheduler.o 00:12:36.629 CXX test/cpp_headers/scsi.o 00:12:36.629 CXX test/cpp_headers/scsi_spec.o 00:12:36.629 CXX test/cpp_headers/sock.o 00:12:36.629 CXX test/cpp_headers/stdinc.o 00:12:36.887 CXX test/cpp_headers/string.o 00:12:36.887 CXX test/cpp_headers/thread.o 00:12:36.887 LINK cuse 00:12:36.887 CXX test/cpp_headers/trace.o 00:12:36.887 CXX test/cpp_headers/trace_parser.o 00:12:36.887 CXX test/cpp_headers/tree.o 00:12:36.887 CXX test/cpp_headers/ublk.o 00:12:36.887 CXX test/cpp_headers/util.o 00:12:36.887 CXX test/cpp_headers/uuid.o 00:12:37.144 CXX test/cpp_headers/version.o 00:12:37.144 CXX test/cpp_headers/vfio_user_pci.o 00:12:37.144 CXX test/cpp_headers/vfio_user_spec.o 00:12:37.144 CXX test/cpp_headers/vhost.o 00:12:37.144 CXX test/cpp_headers/vmd.o 00:12:37.145 CXX test/cpp_headers/xor.o 00:12:37.145 CXX test/cpp_headers/zipf.o 00:12:39.081 LINK esnap 00:12:40.982 00:12:40.982 real 1m16.129s 00:12:40.982 user 7m33.718s 00:12:40.982 sys 1m42.040s 00:12:40.982 11:03:49 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:12:40.982 11:03:49 -- common/autotest_common.sh@10 -- $ set +x 00:12:40.982 ************************************ 00:12:40.982 END TEST make 00:12:40.982 ************************************ 00:12:40.982 11:03:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:12:40.982 11:03:49 -- pm/common@30 -- $ signal_monitor_resources TERM 00:12:40.982 11:03:49 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:12:40.982 11:03:49 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:40.982 11:03:49 -- pm/common@44 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:12:40.982 11:03:49 -- pm/common@45 -- $ pid=5189 00:12:40.982 11:03:49 -- pm/common@52 -- $ sudo kill -TERM 5189 00:12:40.982 11:03:49 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:40.982 11:03:49 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:12:40.982 11:03:49 -- pm/common@45 -- $ pid=5188 00:12:40.982 11:03:49 -- pm/common@52 -- $ sudo kill -TERM 5188 00:12:41.239 11:03:49 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.239 11:03:49 -- nvmf/common.sh@7 -- # uname -s 00:12:41.239 11:03:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.239 11:03:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.239 11:03:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.239 11:03:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.239 11:03:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.239 11:03:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.239 11:03:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.239 11:03:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.239 11:03:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.239 11:03:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.239 11:03:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:12:41.239 11:03:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:12:41.239 11:03:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.239 11:03:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.239 11:03:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:41.239 11:03:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.239 11:03:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.239 11:03:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.239 11:03:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.239 11:03:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.239 11:03:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.240 11:03:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.240 11:03:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.240 11:03:49 -- paths/export.sh@5 -- # export PATH 00:12:41.240 11:03:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.240 11:03:49 -- nvmf/common.sh@47 -- # : 0 00:12:41.240 11:03:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:41.240 11:03:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:41.240 11:03:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.240 11:03:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.240 11:03:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.240 11:03:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:41.240 11:03:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:41.240 11:03:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:41.240 11:03:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:12:41.240 11:03:49 -- spdk/autotest.sh@32 -- # uname -s 00:12:41.240 11:03:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:12:41.240 11:03:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:12:41.240 11:03:49 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:12:41.240 11:03:49 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:12:41.240 11:03:49 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:12:41.240 11:03:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:12:41.240 11:03:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:12:41.240 11:03:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:12:41.240 11:03:49 -- spdk/autotest.sh@48 -- # udevadm_pid=54091 00:12:41.240 11:03:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:12:41.240 11:03:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:12:41.240 11:03:49 -- pm/common@17 -- # local monitor 00:12:41.240 11:03:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:41.240 11:03:49 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54093 00:12:41.240 11:03:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:41.240 11:03:49 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54095 00:12:41.240 11:03:49 -- pm/common@26 -- # sleep 1 00:12:41.240 11:03:49 -- pm/common@21 -- # date +%s 00:12:41.240 11:03:49 -- pm/common@21 -- # date +%s 00:12:41.240 11:03:49 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713438229 00:12:41.240 11:03:49 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713438229 00:12:41.240 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713438229_collect-vmstat.pm.log 00:12:41.240 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713438229_collect-cpu-load.pm.log 00:12:42.173 11:03:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:12:42.173 11:03:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:12:42.173 11:03:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:42.173 11:03:50 -- common/autotest_common.sh@10 -- # set +x 00:12:42.173 11:03:50 -- spdk/autotest.sh@59 -- # 
create_test_list 00:12:42.173 11:03:50 -- common/autotest_common.sh@734 -- # xtrace_disable 00:12:42.173 11:03:50 -- common/autotest_common.sh@10 -- # set +x 00:12:42.173 11:03:50 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:12:42.173 11:03:50 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:12:42.173 11:03:50 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:12:42.173 11:03:50 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:12:42.173 11:03:50 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:12:42.173 11:03:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:12:42.173 11:03:50 -- common/autotest_common.sh@1441 -- # uname 00:12:42.173 11:03:50 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:12:42.173 11:03:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:12:42.173 11:03:50 -- common/autotest_common.sh@1461 -- # uname 00:12:42.173 11:03:50 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:12:42.173 11:03:50 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:12:42.173 11:03:50 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:12:42.173 11:03:50 -- spdk/autotest.sh@72 -- # hash lcov 00:12:42.173 11:03:50 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:12:42.173 11:03:50 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:12:42.173 --rc lcov_branch_coverage=1 00:12:42.173 --rc lcov_function_coverage=1 00:12:42.173 --rc genhtml_branch_coverage=1 00:12:42.173 --rc genhtml_function_coverage=1 00:12:42.173 --rc genhtml_legend=1 00:12:42.173 --rc geninfo_all_blocks=1 00:12:42.173 ' 00:12:42.173 11:03:50 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:12:42.173 --rc lcov_branch_coverage=1 00:12:42.173 --rc lcov_function_coverage=1 00:12:42.173 --rc genhtml_branch_coverage=1 00:12:42.173 --rc genhtml_function_coverage=1 00:12:42.173 --rc genhtml_legend=1 00:12:42.173 --rc geninfo_all_blocks=1 00:12:42.173 ' 00:12:42.173 11:03:50 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:12:42.173 --rc lcov_branch_coverage=1 00:12:42.173 --rc lcov_function_coverage=1 00:12:42.173 --rc genhtml_branch_coverage=1 00:12:42.173 --rc genhtml_function_coverage=1 00:12:42.173 --rc genhtml_legend=1 00:12:42.173 --rc geninfo_all_blocks=1 00:12:42.173 --no-external' 00:12:42.173 11:03:50 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:12:42.173 --rc lcov_branch_coverage=1 00:12:42.173 --rc lcov_function_coverage=1 00:12:42.173 --rc genhtml_branch_coverage=1 00:12:42.173 --rc genhtml_function_coverage=1 00:12:42.173 --rc genhtml_legend=1 00:12:42.173 --rc geninfo_all_blocks=1 00:12:42.173 --no-external' 00:12:42.173 11:03:50 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:12:42.432 lcov: LCOV version 1.14 00:12:42.432 11:03:50 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:12:50.542 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:12:50.542 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:12:50.542 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:12:50.542 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:12:50.542 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:12:50.542 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:12:58.652 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:12:58.652 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no 
functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:13:10.853 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:13:10.853 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:13:10.853 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:13:10.853 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 
00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:13:10.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:13:10.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:13:15.050 11:04:22 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:13:15.050 11:04:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:15.050 11:04:22 -- common/autotest_common.sh@10 -- # set +x 00:13:15.050 11:04:22 -- spdk/autotest.sh@91 -- # rm -f 00:13:15.050 11:04:22 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:15.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:15.050 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:13:15.050 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:13:15.050 11:04:23 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:13:15.050 11:04:23 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:15.050 11:04:23 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:15.050 11:04:23 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:15.050 11:04:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.050 11:04:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:15.050 11:04:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:15.050 11:04:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:15.050 11:04:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.050 11:04:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.050 11:04:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:15.050 11:04:23 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:15.050 11:04:23 -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:13:15.050 11:04:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.050 11:04:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.050 11:04:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:13:15.050 11:04:23 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:13:15.050 11:04:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:13:15.050 11:04:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.050 11:04:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.050 11:04:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:13:15.050 11:04:23 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:13:15.050 11:04:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:13:15.050 11:04:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.050 11:04:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:13:15.050 11:04:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:15.050 11:04:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:15.050 11:04:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:13:15.050 11:04:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:13:15.050 11:04:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:15.050 No valid GPT data, bailing 00:13:15.050 11:04:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:15.050 11:04:23 -- scripts/common.sh@391 -- # pt= 00:13:15.050 11:04:23 -- scripts/common.sh@392 -- # return 1 00:13:15.050 11:04:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:13:15.050 1+0 records in 00:13:15.050 1+0 records out 00:13:15.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395247 s, 265 MB/s 00:13:15.050 11:04:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:15.050 11:04:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:15.050 11:04:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:13:15.050 11:04:23 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:13:15.050 11:04:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:13:15.050 No valid GPT data, bailing 00:13:15.050 11:04:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:13:15.308 11:04:23 -- scripts/common.sh@391 -- # pt= 00:13:15.308 11:04:23 -- scripts/common.sh@392 -- # return 1 00:13:15.308 11:04:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:13:15.308 1+0 records in 00:13:15.308 1+0 records out 00:13:15.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389442 s, 269 MB/s 00:13:15.308 11:04:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:15.308 11:04:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:15.308 11:04:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:13:15.308 11:04:23 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:13:15.308 11:04:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:13:15.308 No valid GPT data, bailing 00:13:15.308 11:04:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:13:15.308 11:04:23 -- scripts/common.sh@391 -- # pt= 00:13:15.308 11:04:23 -- scripts/common.sh@392 -- # return 1 00:13:15.308 11:04:23 -- spdk/autotest.sh@114 -- # dd 
if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:13:15.308 1+0 records in 00:13:15.308 1+0 records out 00:13:15.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00535534 s, 196 MB/s 00:13:15.308 11:04:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:15.308 11:04:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:15.308 11:04:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:13:15.308 11:04:23 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:13:15.308 11:04:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:13:15.308 No valid GPT data, bailing 00:13:15.308 11:04:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:13:15.308 11:04:23 -- scripts/common.sh@391 -- # pt= 00:13:15.308 11:04:23 -- scripts/common.sh@392 -- # return 1 00:13:15.308 11:04:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:13:15.308 1+0 records in 00:13:15.308 1+0 records out 00:13:15.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548138 s, 191 MB/s 00:13:15.308 11:04:23 -- spdk/autotest.sh@118 -- # sync 00:13:15.308 11:04:23 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:13:15.308 11:04:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:13:15.308 11:04:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:13:17.261 11:04:25 -- spdk/autotest.sh@124 -- # uname -s 00:13:17.261 11:04:25 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:13:17.261 11:04:25 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:13:17.261 11:04:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:17.261 11:04:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.261 11:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:17.261 ************************************ 00:13:17.261 START TEST setup.sh 00:13:17.261 ************************************ 00:13:17.261 11:04:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:13:17.261 * Looking for test storage... 00:13:17.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:17.261 11:04:25 -- setup/test-setup.sh@10 -- # uname -s 00:13:17.261 11:04:25 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:13:17.261 11:04:25 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:13:17.261 11:04:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:17.261 11:04:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.261 11:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:17.520 ************************************ 00:13:17.520 START TEST acl 00:13:17.520 ************************************ 00:13:17.520 11:04:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:13:17.520 * Looking for test storage... 
00:13:17.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:17.520 11:04:25 -- setup/acl.sh@10 -- # get_zoned_devs 00:13:17.520 11:04:25 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:17.520 11:04:25 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:17.520 11:04:25 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:17.520 11:04:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.520 11:04:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:17.520 11:04:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:17.520 11:04:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:17.520 11:04:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.520 11:04:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.520 11:04:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:17.520 11:04:25 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:17.520 11:04:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:17.520 11:04:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.520 11:04:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.520 11:04:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:13:17.520 11:04:25 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:13:17.520 11:04:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:13:17.520 11:04:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.520 11:04:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.520 11:04:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:13:17.520 11:04:25 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:13:17.520 11:04:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:13:17.520 11:04:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.520 11:04:25 -- setup/acl.sh@12 -- # devs=() 00:13:17.520 11:04:25 -- setup/acl.sh@12 -- # declare -a devs 00:13:17.520 11:04:25 -- setup/acl.sh@13 -- # drivers=() 00:13:17.520 11:04:25 -- setup/acl.sh@13 -- # declare -A drivers 00:13:17.520 11:04:25 -- setup/acl.sh@51 -- # setup reset 00:13:17.520 11:04:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:17.520 11:04:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:18.086 11:04:26 -- setup/acl.sh@52 -- # collect_setup_devs 00:13:18.086 11:04:26 -- setup/acl.sh@16 -- # local dev driver 00:13:18.086 11:04:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:18.086 11:04:26 -- setup/acl.sh@15 -- # setup output status 00:13:18.086 11:04:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:18.086 11:04:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:13:19.018 11:04:26 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:13:19.018 11:04:26 -- setup/acl.sh@19 -- # continue 00:13:19.018 11:04:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:19.018 Hugepages 00:13:19.018 node hugesize free / total 00:13:19.018 11:04:26 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:13:19.018 11:04:26 -- setup/acl.sh@19 -- # continue 00:13:19.018 11:04:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:19.018 00:13:19.018 Type BDF Vendor Device NUMA Driver 
Device Block devices 00:13:19.018 11:04:26 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:13:19.018 11:04:26 -- setup/acl.sh@19 -- # continue 00:13:19.018 11:04:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:19.018 11:04:27 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:13:19.018 11:04:27 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:13:19.018 11:04:27 -- setup/acl.sh@20 -- # continue 00:13:19.018 11:04:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:19.018 11:04:27 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:13:19.018 11:04:27 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:13:19.018 11:04:27 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:13:19.018 11:04:27 -- setup/acl.sh@22 -- # devs+=("$dev") 00:13:19.018 11:04:27 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:13:19.018 11:04:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:19.018 11:04:27 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:13:19.018 11:04:27 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:13:19.018 11:04:27 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:13:19.018 11:04:27 -- setup/acl.sh@22 -- # devs+=("$dev") 00:13:19.018 11:04:27 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:13:19.018 11:04:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:13:19.018 11:04:27 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:13:19.018 11:04:27 -- setup/acl.sh@54 -- # run_test denied denied 00:13:19.018 11:04:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:19.018 11:04:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.018 11:04:27 -- common/autotest_common.sh@10 -- # set +x 00:13:19.276 ************************************ 00:13:19.276 START TEST denied 00:13:19.276 ************************************ 00:13:19.276 11:04:27 -- common/autotest_common.sh@1111 -- # denied 00:13:19.276 11:04:27 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:13:19.276 11:04:27 -- setup/acl.sh@38 -- # setup output config 00:13:19.276 11:04:27 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:13:19.276 11:04:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:19.276 11:04:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:20.210 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:13:20.210 11:04:28 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:13:20.210 11:04:28 -- setup/acl.sh@28 -- # local dev driver 00:13:20.210 11:04:28 -- setup/acl.sh@30 -- # for dev in "$@" 00:13:20.210 11:04:28 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:13:20.210 11:04:28 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:13:20.210 11:04:28 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:13:20.210 11:04:28 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:13:20.210 11:04:28 -- setup/acl.sh@41 -- # setup reset 00:13:20.210 11:04:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:20.210 11:04:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:20.776 00:13:20.776 real 0m1.447s 00:13:20.776 user 0m0.547s 00:13:20.776 sys 0m0.812s 00:13:20.776 11:04:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:20.776 11:04:28 -- common/autotest_common.sh@10 -- # set +x 00:13:20.776 ************************************ 00:13:20.776 END TEST denied 00:13:20.776 ************************************ 00:13:20.776 11:04:28 -- setup/acl.sh@55 
-- # run_test allowed allowed 00:13:20.776 11:04:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:20.776 11:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:20.776 11:04:28 -- common/autotest_common.sh@10 -- # set +x 00:13:20.776 ************************************ 00:13:20.776 START TEST allowed 00:13:20.776 ************************************ 00:13:20.776 11:04:28 -- common/autotest_common.sh@1111 -- # allowed 00:13:20.776 11:04:28 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:13:20.776 11:04:28 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:13:20.776 11:04:28 -- setup/acl.sh@45 -- # setup output config 00:13:20.776 11:04:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:20.776 11:04:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:21.732 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.732 11:04:29 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:13:21.732 11:04:29 -- setup/acl.sh@28 -- # local dev driver 00:13:21.732 11:04:29 -- setup/acl.sh@30 -- # for dev in "$@" 00:13:21.732 11:04:29 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:13:21.732 11:04:29 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:13:21.732 11:04:29 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:13:21.732 11:04:29 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:13:21.732 11:04:29 -- setup/acl.sh@48 -- # setup reset 00:13:21.732 11:04:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:21.732 11:04:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:22.297 00:13:22.297 real 0m1.483s 00:13:22.297 user 0m0.626s 00:13:22.297 sys 0m0.850s 00:13:22.297 ************************************ 00:13:22.297 END TEST allowed 00:13:22.298 ************************************ 00:13:22.298 11:04:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:22.298 11:04:30 -- common/autotest_common.sh@10 -- # set +x 00:13:22.298 00:13:22.298 real 0m4.863s 00:13:22.298 user 0m2.044s 00:13:22.298 sys 0m2.714s 00:13:22.298 11:04:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:22.298 ************************************ 00:13:22.298 END TEST acl 00:13:22.298 11:04:30 -- common/autotest_common.sh@10 -- # set +x 00:13:22.298 ************************************ 00:13:22.298 11:04:30 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:13:22.298 11:04:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:22.298 11:04:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.298 11:04:30 -- common/autotest_common.sh@10 -- # set +x 00:13:22.298 ************************************ 00:13:22.298 START TEST hugepages 00:13:22.298 ************************************ 00:13:22.298 11:04:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:13:22.557 * Looking for test storage... 
00:13:22.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:22.557 11:04:30 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:13:22.557 11:04:30 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:13:22.557 11:04:30 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:13:22.557 11:04:30 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:13:22.557 11:04:30 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:13:22.557 11:04:30 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:13:22.557 11:04:30 -- setup/common.sh@17 -- # local get=Hugepagesize 00:13:22.557 11:04:30 -- setup/common.sh@18 -- # local node= 00:13:22.557 11:04:30 -- setup/common.sh@19 -- # local var val 00:13:22.557 11:04:30 -- setup/common.sh@20 -- # local mem_f mem 00:13:22.557 11:04:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:22.557 11:04:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:22.557 11:04:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:22.557 11:04:30 -- setup/common.sh@28 -- # mapfile -t mem 00:13:22.557 11:04:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5278516 kB' 'MemAvailable: 7378512 kB' 'Buffers: 2436 kB' 'Cached: 2309528 kB' 'SwapCached: 0 kB' 'Active: 875192 kB' 'Inactive: 1543048 kB' 'Active(anon): 116764 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 107896 kB' 'Mapped: 48828 kB' 'Shmem: 10488 kB' 'KReclaimable: 70920 kB' 'Slab: 146056 kB' 'SReclaimable: 70920 kB' 'SUnreclaim: 75136 kB' 'KernelStack: 6560 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 340280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- 
setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.557 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.557 11:04:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # continue 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # IFS=': ' 00:13:22.558 11:04:30 -- setup/common.sh@31 -- # read -r var val _ 00:13:22.558 11:04:30 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:13:22.558 11:04:30 -- setup/common.sh@33 -- # echo 2048 00:13:22.558 11:04:30 -- setup/common.sh@33 -- # return 0 00:13:22.558 11:04:30 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:13:22.558 11:04:30 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:13:22.558 11:04:30 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:13:22.558 11:04:30 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:13:22.558 11:04:30 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:13:22.558 11:04:30 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
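(Editor's note: the trace above is setup/common.sh scanning /proc/meminfo field by field until it reaches Hugepagesize and echoes 2048. The following is a minimal sketch of that scan loop, assuming an illustrative helper name; it is not the verbatim SPDK script, only the pattern the xtrace output follows.)

  #!/usr/bin/env bash
  # Sketch of the /proc/meminfo scan traced above; "get_meminfo_field" is an
  # illustrative name, not the exact SPDK helper.
  get_meminfo_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Skip every line until the requested key (e.g. Hugepagesize) matches,
          # which is why the trace prints one "continue" per meminfo field.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  # The trace ends with "echo 2048": the default hugepage size on this VM, in kB.
  default_hugepages=$(get_meminfo_field Hugepagesize)
  echo "default hugepage size: ${default_hugepages} kB"
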
00:13:22.558 11:04:30 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:13:22.558 11:04:30 -- setup/hugepages.sh@207 -- # get_nodes 00:13:22.558 11:04:30 -- setup/hugepages.sh@27 -- # local node 00:13:22.558 11:04:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:22.558 11:04:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:13:22.558 11:04:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:22.558 11:04:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:22.558 11:04:30 -- setup/hugepages.sh@208 -- # clear_hp 00:13:22.558 11:04:30 -- setup/hugepages.sh@37 -- # local node hp 00:13:22.558 11:04:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:13:22.558 11:04:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:22.558 11:04:30 -- setup/hugepages.sh@41 -- # echo 0 00:13:22.558 11:04:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:22.558 11:04:30 -- setup/hugepages.sh@41 -- # echo 0 00:13:22.558 11:04:30 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:13:22.558 11:04:30 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:13:22.558 11:04:30 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:13:22.558 11:04:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:22.558 11:04:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.558 11:04:30 -- common/autotest_common.sh@10 -- # set +x 00:13:22.558 ************************************ 00:13:22.558 START TEST default_setup 00:13:22.558 ************************************ 00:13:22.558 11:04:30 -- common/autotest_common.sh@1111 -- # default_setup 00:13:22.558 11:04:30 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:13:22.558 11:04:30 -- setup/hugepages.sh@49 -- # local size=2097152 00:13:22.558 11:04:30 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:13:22.558 11:04:30 -- setup/hugepages.sh@51 -- # shift 00:13:22.558 11:04:30 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:13:22.558 11:04:30 -- setup/hugepages.sh@52 -- # local node_ids 00:13:22.558 11:04:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:22.558 11:04:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:22.558 11:04:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:13:22.558 11:04:30 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:13:22.558 11:04:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:22.558 11:04:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:22.558 11:04:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:22.558 11:04:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:22.558 11:04:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:22.558 11:04:30 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:13:22.558 11:04:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:13:22.558 11:04:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:13:22.558 11:04:30 -- setup/hugepages.sh@73 -- # return 0 00:13:22.558 11:04:30 -- setup/hugepages.sh@137 -- # setup output 00:13:22.558 11:04:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:22.558 11:04:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:23.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:23.383 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:23.383 
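(Editor's note: in the trace above, default_setup calls "get_test_nr_hugepages 2097152 0", which turns a requested 2 GiB pool into 1024 pages of the 2048 kB default size pinned to node 0. The sketch below shows that arithmetic and the accounting check that the verify_nr_hugepages trace further down performs; variable names are illustrative assumptions, not the exact SPDK helpers.)

  # Sketch of the arithmetic behind "get_test_nr_hugepages 2097152 0" above.
  size_kb=2097152        # requested pool size passed by default_setup
  hugepagesize_kb=2048   # Hugepagesize reported by /proc/meminfo above
  node_ids=(0)           # the single NUMA node requested

  nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024
  echo "NRHUGE=${nr_hugepages} on node ${node_ids[0]}"

  # verify_nr_hugepages (traced below) then re-reads /proc/meminfo once per
  # field and requires
  #   HugePages_Total == nr_hugepages + HugePages_Surp + HugePages_Rsvd
  # which is the "(( 1024 == nr_hugepages + surp + resv ))" check later in
  # this log.
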
0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:23.383 11:04:31 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:13:23.383 11:04:31 -- setup/hugepages.sh@89 -- # local node 00:13:23.383 11:04:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:23.383 11:04:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:23.383 11:04:31 -- setup/hugepages.sh@92 -- # local surp 00:13:23.383 11:04:31 -- setup/hugepages.sh@93 -- # local resv 00:13:23.383 11:04:31 -- setup/hugepages.sh@94 -- # local anon 00:13:23.383 11:04:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:23.383 11:04:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:23.383 11:04:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:23.383 11:04:31 -- setup/common.sh@18 -- # local node= 00:13:23.383 11:04:31 -- setup/common.sh@19 -- # local var val 00:13:23.383 11:04:31 -- setup/common.sh@20 -- # local mem_f mem 00:13:23.383 11:04:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:23.383 11:04:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:23.383 11:04:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:23.383 11:04:31 -- setup/common.sh@28 -- # mapfile -t mem 00:13:23.383 11:04:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:23.383 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.383 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.383 11:04:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7377160 kB' 'MemAvailable: 9476984 kB' 'Buffers: 2436 kB' 'Cached: 2309524 kB' 'SwapCached: 0 kB' 'Active: 891460 kB' 'Inactive: 1543048 kB' 'Active(anon): 133032 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 632 kB' 'Writeback: 0 kB' 'AnonPages: 124192 kB' 'Mapped: 49028 kB' 'Shmem: 10468 kB' 'KReclaimable: 70576 kB' 'Slab: 145728 kB' 'SReclaimable: 70576 kB' 'SUnreclaim: 75152 kB' 'KernelStack: 6512 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:23.383 11:04:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.383 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.383 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.383 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.383 11:04:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.383 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.383 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 
11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 
-- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.384 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.384 11:04:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:23.384 11:04:31 -- setup/common.sh@33 -- # echo 0 00:13:23.384 11:04:31 -- setup/common.sh@33 -- # return 0 00:13:23.385 11:04:31 -- setup/hugepages.sh@97 -- # anon=0 00:13:23.385 11:04:31 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:23.385 11:04:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:23.385 11:04:31 -- setup/common.sh@18 -- # local node= 00:13:23.385 11:04:31 -- setup/common.sh@19 -- # local var val 00:13:23.385 11:04:31 -- setup/common.sh@20 -- # local mem_f mem 00:13:23.385 11:04:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:23.385 11:04:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:23.385 11:04:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:23.385 11:04:31 -- setup/common.sh@28 -- # mapfile -t mem 00:13:23.385 11:04:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7376660 kB' 'MemAvailable: 9476484 kB' 'Buffers: 2436 kB' 'Cached: 2309524 kB' 'SwapCached: 0 kB' 'Active: 891208 kB' 'Inactive: 1543048 kB' 'Active(anon): 132780 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 632 kB' 'Writeback: 0 kB' 'AnonPages: 123932 kB' 'Mapped: 49028 kB' 'Shmem: 10468 kB' 'KReclaimable: 70576 kB' 'Slab: 145724 kB' 'SReclaimable: 70576 kB' 'SUnreclaim: 75148 kB' 'KernelStack: 6480 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 
00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.385 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.385 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- 
setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 
00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.386 11:04:31 -- setup/common.sh@33 -- # echo 0 00:13:23.386 11:04:31 -- setup/common.sh@33 -- # return 0 00:13:23.386 11:04:31 -- setup/hugepages.sh@99 -- # surp=0 00:13:23.386 11:04:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:23.386 11:04:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:23.386 11:04:31 -- setup/common.sh@18 -- # local node= 00:13:23.386 11:04:31 -- setup/common.sh@19 -- # local var val 00:13:23.386 11:04:31 -- setup/common.sh@20 -- # local mem_f mem 00:13:23.386 11:04:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:23.386 11:04:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:23.386 11:04:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:23.386 11:04:31 -- setup/common.sh@28 -- # mapfile -t mem 00:13:23.386 11:04:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7376660 kB' 'MemAvailable: 9476488 kB' 'Buffers: 2436 kB' 'Cached: 2309520 kB' 
'SwapCached: 0 kB' 'Active: 891020 kB' 'Inactive: 1543048 kB' 'Active(anon): 132592 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 632 kB' 'Writeback: 0 kB' 'AnonPages: 123748 kB' 'Mapped: 48900 kB' 'Shmem: 10464 kB' 'KReclaimable: 70580 kB' 'Slab: 145728 kB' 'SReclaimable: 70580 kB' 'SUnreclaim: 75148 kB' 'KernelStack: 6480 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.386 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.386 11:04:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.387 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.387 11:04:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.646 11:04:31 -- setup/common.sh@31 -- # read -r var val 
_ 00:13:23.646 11:04:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 
00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:23.647 11:04:31 -- setup/common.sh@33 -- # echo 0 00:13:23.647 11:04:31 -- setup/common.sh@33 -- # return 0 00:13:23.647 11:04:31 -- setup/hugepages.sh@100 -- # resv=0 00:13:23.647 nr_hugepages=1024 00:13:23.647 11:04:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:23.647 resv_hugepages=0 00:13:23.647 11:04:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:23.647 surplus_hugepages=0 00:13:23.647 11:04:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:23.647 anon_hugepages=0 00:13:23.647 11:04:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:23.647 11:04:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:23.647 11:04:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:23.647 11:04:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:23.647 11:04:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:23.647 11:04:31 -- setup/common.sh@18 -- # local node= 00:13:23.647 11:04:31 -- setup/common.sh@19 -- # local var val 00:13:23.647 11:04:31 -- setup/common.sh@20 -- # local mem_f mem 00:13:23.647 11:04:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:23.647 11:04:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:23.647 11:04:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:23.647 11:04:31 -- setup/common.sh@28 -- # mapfile -t mem 00:13:23.647 11:04:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7376156 kB' 'MemAvailable: 9475984 kB' 'Buffers: 2436 kB' 'Cached: 2309520 kB' 'SwapCached: 0 kB' 'Active: 890916 kB' 'Inactive: 1543048 kB' 'Active(anon): 132488 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 632 kB' 'Writeback: 0 kB' 'AnonPages: 123712 kB' 'Mapped: 48852 kB' 'Shmem: 10464 kB' 'KReclaimable: 70580 kB' 'Slab: 145728 kB' 'SReclaimable: 70580 kB' 'SUnreclaim: 75148 kB' 'KernelStack: 6464 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356208 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read 
-r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.647 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.647 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 
00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.648 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.648 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:23.648 11:04:31 -- setup/common.sh@33 -- # echo 1024 
00:13:23.648 11:04:31 -- setup/common.sh@33 -- # return 0 00:13:23.648 11:04:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:23.648 11:04:31 -- setup/hugepages.sh@112 -- # get_nodes 00:13:23.648 11:04:31 -- setup/hugepages.sh@27 -- # local node 00:13:23.648 11:04:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:23.648 11:04:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:23.648 11:04:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:23.648 11:04:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:23.648 11:04:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:23.648 11:04:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:23.648 11:04:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:23.648 11:04:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:23.648 11:04:31 -- setup/common.sh@18 -- # local node=0 00:13:23.648 11:04:31 -- setup/common.sh@19 -- # local var val 00:13:23.648 11:04:31 -- setup/common.sh@20 -- # local mem_f mem 00:13:23.648 11:04:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:23.648 11:04:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:23.649 11:04:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:23.649 11:04:31 -- setup/common.sh@28 -- # mapfile -t mem 00:13:23.649 11:04:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7376156 kB' 'MemUsed: 4865824 kB' 'SwapCached: 0 kB' 'Active: 890896 kB' 'Inactive: 1543048 kB' 'Active(anon): 132468 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 632 kB' 'Writeback: 0 kB' 'FilePages: 2311956 kB' 'Mapped: 48852 kB' 'AnonPages: 123692 kB' 'Shmem: 10464 kB' 'KernelStack: 6500 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70580 kB' 'Slab: 145728 kB' 'SReclaimable: 70580 kB' 'SUnreclaim: 75148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # 
IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 
11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 
11:04:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # continue 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # IFS=': ' 00:13:23.649 11:04:31 -- setup/common.sh@31 -- # read -r var val _ 00:13:23.649 11:04:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:23.649 11:04:31 -- setup/common.sh@33 -- # echo 0 00:13:23.649 11:04:31 -- setup/common.sh@33 -- # return 0 00:13:23.649 11:04:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:23.650 11:04:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:23.650 11:04:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:23.650 11:04:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:23.650 node0=1024 expecting 1024 00:13:23.650 11:04:31 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:23.650 11:04:31 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:23.650 00:13:23.650 real 0m1.005s 00:13:23.650 user 0m0.474s 00:13:23.650 sys 0m0.501s 00:13:23.650 11:04:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:23.650 11:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:23.650 ************************************ 00:13:23.650 END TEST default_setup 00:13:23.650 ************************************ 00:13:23.650 11:04:31 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:13:23.650 11:04:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:23.650 11:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.650 11:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:23.650 ************************************ 00:13:23.650 START TEST per_node_1G_alloc 00:13:23.650 ************************************ 
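The per_node_1G_alloc trace that follows requests 1 GB of hugepages (NRHUGE=512, i.e. 512 pages of 2048 kB) pinned to NUMA node 0 (HUGENODE=0) and then re-reads /proc/meminfo and the per-node meminfo files to confirm the kernel's accounting, just as the default_setup test above checked that HugePages_Total (1024) equals nr_hugepages + surplus + reserved pages. The sketch below is a minimal, illustrative bash rendering of that flow; the helper name get_meminfo, the verify step, and the sysfs paths mirror what the traced setup/common.sh and setup/hugepages.sh helpers do, but the code itself is a simplified assumption-laden stand-in, not the SPDK scripts.

#!/usr/bin/env bash
# Illustrative sketch only -- not the SPDK setup scripts. Needs root to write sysfs.
set -euo pipefail

node=0             # HUGENODE=0 in the trace: pin the pages to NUMA node 0
nr_hugepages=512   # NRHUGE=512: 512 x 2048 kB pages == 1 GB

# get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo, or from the
# per-node meminfo file when NODE is given. This is the same "IFS=': '; read -r
# var val _" field-by-field scan that dominates the trace above.
get_meminfo() {
    local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node "$node" }                # per-node lines carry a "Node <N> " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# Ask the kernel for the pages on the chosen node (2048 kB hugepage size assumed,
# matching the Hugepagesize reported in the trace).
echo "$nr_hugepages" \
    > "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"

# Verify the accounting the way verify_nr_hugepages does: the pool size must
# equal the request plus any surplus and reserved pages.
total=$(get_meminfo HugePages_Total)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
(( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }
echo "node$node=$(get_meminfo HugePages_Total "$node") expecting $nr_hugepages"

Run as root on an otherwise idle single-node VM like the one in this job, the sketch prints "node0=512 expecting 512"; the real test drives the same checks through setup/hugepages.sh, which is why the trace below walks every meminfo field on each lookup.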
00:13:23.650 11:04:31 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:13:23.650 11:04:31 -- setup/hugepages.sh@143 -- # local IFS=, 00:13:23.650 11:04:31 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:13:23.650 11:04:31 -- setup/hugepages.sh@49 -- # local size=1048576 00:13:23.650 11:04:31 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:13:23.650 11:04:31 -- setup/hugepages.sh@51 -- # shift 00:13:23.650 11:04:31 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:13:23.650 11:04:31 -- setup/hugepages.sh@52 -- # local node_ids 00:13:23.650 11:04:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:23.650 11:04:31 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:13:23.650 11:04:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:13:23.650 11:04:31 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:13:23.650 11:04:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:23.650 11:04:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:13:23.650 11:04:31 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:23.650 11:04:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:23.650 11:04:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:23.650 11:04:31 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:13:23.650 11:04:31 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:13:23.650 11:04:31 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:13:23.650 11:04:31 -- setup/hugepages.sh@73 -- # return 0 00:13:23.650 11:04:31 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:13:23.650 11:04:31 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:13:23.650 11:04:31 -- setup/hugepages.sh@146 -- # setup output 00:13:23.650 11:04:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:23.650 11:04:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:24.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:24.218 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:24.218 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:24.218 11:04:32 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:13:24.218 11:04:32 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:13:24.218 11:04:32 -- setup/hugepages.sh@89 -- # local node 00:13:24.218 11:04:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:24.218 11:04:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:24.218 11:04:32 -- setup/hugepages.sh@92 -- # local surp 00:13:24.218 11:04:32 -- setup/hugepages.sh@93 -- # local resv 00:13:24.218 11:04:32 -- setup/hugepages.sh@94 -- # local anon 00:13:24.218 11:04:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:24.218 11:04:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:24.218 11:04:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:24.218 11:04:32 -- setup/common.sh@18 -- # local node= 00:13:24.218 11:04:32 -- setup/common.sh@19 -- # local var val 00:13:24.218 11:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:13:24.218 11:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:24.218 11:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:24.218 11:04:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:24.218 11:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:13:24.218 11:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:24.218 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 
00:13:24.218 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8427580 kB' 'MemAvailable: 10527444 kB' 'Buffers: 2436 kB' 'Cached: 2309552 kB' 'SwapCached: 0 kB' 'Active: 891740 kB' 'Inactive: 1543096 kB' 'Active(anon): 133312 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 124688 kB' 'Mapped: 49040 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145760 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75200 kB' 'KernelStack: 6564 kB' 'PageTables: 4672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 
11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.219 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.219 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.220 11:04:32 -- setup/common.sh@33 -- # echo 0 00:13:24.220 11:04:32 -- setup/common.sh@33 -- # return 0 00:13:24.220 11:04:32 -- setup/hugepages.sh@97 -- # anon=0 00:13:24.220 11:04:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:24.220 11:04:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:24.220 11:04:32 -- setup/common.sh@18 -- # local node= 00:13:24.220 11:04:32 -- setup/common.sh@19 -- # local var val 00:13:24.220 11:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:13:24.220 11:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:24.220 11:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:24.220 11:04:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:24.220 11:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:13:24.220 11:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8427580 kB' 'MemAvailable: 10527448 kB' 'Buffers: 2436 kB' 'Cached: 2309556 kB' 'SwapCached: 0 kB' 'Active: 891568 kB' 'Inactive: 1543100 kB' 'Active(anon): 133140 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 124280 kB' 'Mapped: 48980 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 
kB' 'Slab: 145756 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75196 kB' 'KernelStack: 6548 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 
11:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 
00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.220 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.220 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # 
read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.221 11:04:32 -- setup/common.sh@33 -- # echo 0 00:13:24.221 11:04:32 -- setup/common.sh@33 -- # return 0 00:13:24.221 11:04:32 -- setup/hugepages.sh@99 -- # surp=0 00:13:24.221 11:04:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:24.221 11:04:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:24.221 11:04:32 -- setup/common.sh@18 -- # local node= 00:13:24.221 11:04:32 -- setup/common.sh@19 -- # local var val 00:13:24.221 11:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:13:24.221 11:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:24.221 11:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:24.221 11:04:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:24.221 11:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:13:24.221 11:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8427580 kB' 'MemAvailable: 10527448 kB' 'Buffers: 2436 kB' 'Cached: 2309556 kB' 'SwapCached: 0 kB' 'Active: 891424 kB' 'Inactive: 1543100 kB' 'Active(anon): 132996 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 124096 kB' 'Mapped: 48924 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145752 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75192 kB' 'KernelStack: 6544 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.221 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.221 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- 
# continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.222 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.222 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.222 11:04:32 -- setup/common.sh@33 -- # echo 0 00:13:24.222 11:04:32 -- setup/common.sh@33 -- # return 0 00:13:24.222 11:04:32 -- setup/hugepages.sh@100 -- # resv=0 00:13:24.222 nr_hugepages=512 00:13:24.223 11:04:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:13:24.223 
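The long run of "continue" entries above is the get_meminfo helper from setup/common.sh stepping through /proc/meminfo one field at a time until it reaches the requested key (here HugePages_Surp and HugePages_Rsvd, both 0). A minimal bash sketch of what that trace corresponds to, reconstructed from the xtrace output rather than copied from the repository, so exact details may differ:

    get_meminfo() {
        local get=$1 node=$2        # key to look up, optional NUMA node id
        local var val _
        local mem_f=/proc/meminfo mem
        # use the per-node meminfo file when a node id was passed in
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files (needs extglob)
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every mismatch is one "continue" entry in the trace
            echo "$val"                        # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }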
resv_hugepages=0 00:13:24.223 11:04:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:24.223 surplus_hugepages=0 00:13:24.223 11:04:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:24.223 anon_hugepages=0 00:13:24.223 11:04:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:24.223 11:04:32 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:13:24.223 11:04:32 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:13:24.223 11:04:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:24.223 11:04:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:24.223 11:04:32 -- setup/common.sh@18 -- # local node= 00:13:24.223 11:04:32 -- setup/common.sh@19 -- # local var val 00:13:24.223 11:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:13:24.223 11:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:24.223 11:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:24.223 11:04:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:24.223 11:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:13:24.223 11:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8427580 kB' 'MemAvailable: 10527448 kB' 'Buffers: 2436 kB' 'Cached: 2309556 kB' 'SwapCached: 0 kB' 'Active: 891152 kB' 'Inactive: 1543100 kB' 'Active(anon): 132724 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 123856 kB' 'Mapped: 48924 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145752 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75192 kB' 'KernelStack: 6528 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 
00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.223 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.223 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 
11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.224 11:04:32 -- setup/common.sh@33 -- # echo 512 00:13:24.224 11:04:32 -- setup/common.sh@33 -- # return 0 00:13:24.224 11:04:32 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:13:24.224 11:04:32 -- setup/hugepages.sh@112 -- # get_nodes 00:13:24.224 11:04:32 -- setup/hugepages.sh@27 -- # local node 00:13:24.224 11:04:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:24.224 11:04:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:13:24.224 11:04:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:24.224 11:04:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:24.224 11:04:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:24.224 11:04:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:24.224 11:04:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:24.224 11:04:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:24.224 11:04:32 -- setup/common.sh@18 -- # local node=0 00:13:24.224 11:04:32 -- setup/common.sh@19 -- # local var val 00:13:24.224 11:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:13:24.224 11:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:24.224 11:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:24.224 11:04:32 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:13:24.224 11:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:13:24.224 11:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:24.224 11:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8427580 kB' 'MemUsed: 3814400 kB' 'SwapCached: 0 kB' 'Active: 891152 kB' 'Inactive: 1543100 kB' 'Active(anon): 132724 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'FilePages: 2311992 kB' 'Mapped: 48924 kB' 'AnonPages: 124116 kB' 'Shmem: 10464 kB' 'KernelStack: 6528 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70560 kB' 'Slab: 145752 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.224 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.224 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- 
setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.225 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.225 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.225 11:04:32 -- setup/common.sh@33 -- # echo 0 00:13:24.225 11:04:32 -- setup/common.sh@33 -- # return 0 00:13:24.225 11:04:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:24.225 11:04:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:24.225 11:04:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:24.225 11:04:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:24.225 11:04:32 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:13:24.225 node0=512 expecting 512 00:13:24.225 11:04:32 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:13:24.225 00:13:24.225 real 0m0.515s 00:13:24.225 user 0m0.265s 00:13:24.225 sys 0m0.283s 00:13:24.225 11:04:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:24.225 11:04:32 -- common/autotest_common.sh@10 -- # set +x 00:13:24.225 ************************************ 00:13:24.225 END TEST per_node_1G_alloc 00:13:24.225 ************************************ 00:13:24.225 11:04:32 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:13:24.225 11:04:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:24.225 11:04:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:24.225 11:04:32 -- common/autotest_common.sh@10 -- # set +x 00:13:24.225 ************************************ 00:13:24.225 START TEST even_2G_alloc 00:13:24.225 ************************************ 00:13:24.225 11:04:32 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:13:24.225 11:04:32 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:13:24.225 11:04:32 -- setup/hugepages.sh@49 -- # local size=2097152 00:13:24.225 11:04:32 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:24.225 11:04:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:24.225 11:04:32 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:13:24.225 11:04:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:24.225 11:04:32 -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:24.225 11:04:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:24.225 11:04:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:24.225 11:04:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:24.225 11:04:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:24.225 11:04:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:24.225 11:04:32 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:24.225 11:04:32 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:24.225 11:04:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:24.225 11:04:32 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:13:24.225 11:04:32 -- setup/hugepages.sh@83 -- # : 0 00:13:24.225 11:04:32 -- 
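At this point the per_node_1G_alloc check has passed: the node-level count printed as "node0=512 expecting 512", i.e. all 512 reserved 2 MiB pages (1 GiB) landed on node 0 as requested. The even_2G_alloc test that starts next asks for an even spread instead; its target page count is the requested size divided by the default hugepage size, and the verification that follows repeats the same surplus/reserved bookkeeping per node. A rough sketch of that flow, inferred from the trace rather than taken from the scripts, with nodes_test and nodes_sys being the arrays seen in the xtrace:

    # 2 GiB expressed in kB, divided by the 2048 kB Hugepagesize seen in the
    # meminfo dumps above: 2097152 / 2048 = 1024 pages
    nr_hugepages=$(( 2097152 / 2048 ))

    # spread them evenly across all memory nodes, then re-run the checks
    NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh

    # per-node verification, as in the test that just finished
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]]
    done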
setup/hugepages.sh@84 -- # : 0 00:13:24.225 11:04:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:24.225 11:04:32 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:13:24.225 11:04:32 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:13:24.225 11:04:32 -- setup/hugepages.sh@153 -- # setup output 00:13:24.225 11:04:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:24.225 11:04:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:24.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:24.825 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:24.825 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:24.825 11:04:32 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:13:24.825 11:04:32 -- setup/hugepages.sh@89 -- # local node 00:13:24.825 11:04:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:24.825 11:04:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:24.825 11:04:32 -- setup/hugepages.sh@92 -- # local surp 00:13:24.825 11:04:32 -- setup/hugepages.sh@93 -- # local resv 00:13:24.825 11:04:32 -- setup/hugepages.sh@94 -- # local anon 00:13:24.825 11:04:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:24.825 11:04:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:24.826 11:04:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:24.826 11:04:32 -- setup/common.sh@18 -- # local node= 00:13:24.826 11:04:32 -- setup/common.sh@19 -- # local var val 00:13:24.826 11:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:13:24.826 11:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:24.826 11:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:24.826 11:04:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:24.826 11:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:13:24.826 11:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7378272 kB' 'MemAvailable: 9478148 kB' 'Buffers: 2436 kB' 'Cached: 2309564 kB' 'SwapCached: 0 kB' 'Active: 891560 kB' 'Inactive: 1543108 kB' 'Active(anon): 133132 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 984 kB' 'Writeback: 0 kB' 'AnonPages: 124280 kB' 'Mapped: 49052 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145824 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75264 kB' 'KernelStack: 6500 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var 
val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 
11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.826 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.826 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # 
continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:24.827 11:04:32 -- setup/common.sh@33 -- # echo 0 00:13:24.827 11:04:32 -- setup/common.sh@33 -- # return 0 00:13:24.827 11:04:32 -- setup/hugepages.sh@97 -- # anon=0 00:13:24.827 11:04:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:24.827 11:04:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:24.827 11:04:32 -- setup/common.sh@18 -- # local node= 00:13:24.827 11:04:32 -- setup/common.sh@19 -- # local var val 00:13:24.827 11:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:13:24.827 11:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:24.827 11:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:24.827 11:04:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:24.827 11:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:13:24.827 11:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7378272 kB' 'MemAvailable: 9478148 kB' 'Buffers: 2436 kB' 'Cached: 2309564 kB' 'SwapCached: 0 kB' 'Active: 891556 kB' 'Inactive: 1543108 kB' 'Active(anon): 133128 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 984 kB' 'Writeback: 0 kB' 'AnonPages: 124224 kB' 'Mapped: 49052 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145824 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75264 kB' 'KernelStack: 6484 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # 
continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.827 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.827 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- 
# read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- 
# continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.828 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.828 11:04:32 -- setup/common.sh@33 -- # echo 0 00:13:24.828 11:04:32 -- setup/common.sh@33 -- # return 0 00:13:24.828 11:04:32 -- setup/hugepages.sh@99 -- # surp=0 00:13:24.828 11:04:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:24.828 11:04:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:24.828 11:04:32 -- setup/common.sh@18 -- # local node= 00:13:24.828 11:04:32 -- setup/common.sh@19 -- # local var val 00:13:24.828 11:04:32 -- 
setup/common.sh@20 -- # local mem_f mem 00:13:24.828 11:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:24.828 11:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:24.828 11:04:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:24.828 11:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:13:24.828 11:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.828 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7378416 kB' 'MemAvailable: 9478292 kB' 'Buffers: 2436 kB' 'Cached: 2309564 kB' 'SwapCached: 0 kB' 'Active: 891168 kB' 'Inactive: 1543108 kB' 'Active(anon): 132740 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 984 kB' 'Writeback: 0 kB' 'AnonPages: 123884 kB' 'Mapped: 49052 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145824 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75264 kB' 'KernelStack: 6484 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 
00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- 
setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.829 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.829 11:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val 
_ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:24.830 11:04:32 -- setup/common.sh@33 -- # echo 0 00:13:24.830 11:04:32 -- setup/common.sh@33 -- # return 0 00:13:24.830 11:04:32 -- setup/hugepages.sh@100 -- # resv=0 00:13:24.830 nr_hugepages=1024 00:13:24.830 11:04:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:24.830 resv_hugepages=0 00:13:24.830 11:04:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:24.830 surplus_hugepages=0 00:13:24.830 11:04:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:24.830 anon_hugepages=0 00:13:24.830 11:04:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:24.830 11:04:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:24.830 11:04:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:24.830 11:04:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:24.830 11:04:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:24.830 11:04:32 -- setup/common.sh@18 -- # local node= 00:13:24.830 11:04:32 -- setup/common.sh@19 -- # local var val 00:13:24.830 11:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:13:24.830 11:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:24.830 11:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:24.830 11:04:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:24.830 11:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:13:24.830 11:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7378164 kB' 'MemAvailable: 9478040 kB' 'Buffers: 2436 kB' 'Cached: 2309564 kB' 'SwapCached: 0 kB' 'Active: 
890980 kB' 'Inactive: 1543108 kB' 'Active(anon): 132552 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 984 kB' 'Writeback: 0 kB' 'AnonPages: 123944 kB' 'Mapped: 48924 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145824 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75264 kB' 'KernelStack: 6512 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.830 11:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.830 11:04:32 -- 
setup/common.sh@32 -- # continue 00:13:24.830 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 
00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 
00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.831 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.831 11:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 
00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:24.832 11:04:32 -- setup/common.sh@33 -- # echo 1024 00:13:24.832 11:04:32 -- setup/common.sh@33 -- # return 0 00:13:24.832 11:04:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:24.832 11:04:32 -- setup/hugepages.sh@112 -- # get_nodes 00:13:24.832 11:04:32 -- setup/hugepages.sh@27 -- # local node 00:13:24.832 11:04:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:24.832 11:04:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:24.832 11:04:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:24.832 11:04:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:24.832 11:04:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:24.832 11:04:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:24.832 11:04:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:24.832 11:04:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:24.832 11:04:32 -- setup/common.sh@18 -- # local node=0 00:13:24.832 11:04:32 -- setup/common.sh@19 -- # local var val 00:13:24.832 11:04:32 -- setup/common.sh@20 -- # local mem_f mem 00:13:24.832 11:04:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:24.832 11:04:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:24.832 11:04:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:24.832 11:04:32 -- setup/common.sh@28 -- # mapfile -t mem 00:13:24.832 11:04:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7378164 kB' 'MemUsed: 4863816 kB' 'SwapCached: 0 kB' 'Active: 891040 kB' 'Inactive: 1543108 kB' 'Active(anon): 132612 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 984 kB' 'Writeback: 0 kB' 'FilePages: 2312000 kB' 'Mapped: 48924 kB' 'AnonPages: 123752 kB' 'Shmem: 10464 kB' 'KernelStack: 6496 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70560 kB' 'Slab: 145824 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 
00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.832 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.832 11:04:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- 
setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # continue 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # IFS=': ' 00:13:24.833 11:04:32 -- setup/common.sh@31 -- # read -r var val _ 00:13:24.833 11:04:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:24.833 11:04:32 -- setup/common.sh@33 -- # echo 0 00:13:24.833 11:04:32 -- setup/common.sh@33 -- # return 0 00:13:24.833 11:04:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:24.833 11:04:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:24.833 11:04:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:24.833 11:04:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:24.833 11:04:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:24.833 node0=1024 expecting 1024 00:13:24.833 11:04:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:24.833 00:13:24.833 real 0m0.524s 00:13:24.833 user 0m0.252s 00:13:24.833 sys 0m0.304s 
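The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: every key that is not the requested one (here HugePages_Surp) hits "continue", and the value of the matching line is echoed back to hugepages.sh. A minimal sketch of that lookup, reconstructed from the trace rather than copied from setup/common.sh (the function name below is illustrative, not the real helper):

    # Hypothetical helper mirroring the field lookup traced above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the key matches
            echo "$val"                        # value only; a trailing "kB" unit lands in "_"
            return 0
        done < /proc/meminfo
    }

When a node argument is supplied, the traced helper instead looks at /sys/devices/system/node/node$node/meminfo and strips the leading "Node N " prefix with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") before doing the same per-field scan.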
00:13:24.833 11:04:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:24.833 11:04:32 -- common/autotest_common.sh@10 -- # set +x 00:13:24.833 ************************************ 00:13:24.833 END TEST even_2G_alloc 00:13:24.833 ************************************ 00:13:24.833 11:04:32 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:13:24.833 11:04:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:24.833 11:04:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:24.833 11:04:32 -- common/autotest_common.sh@10 -- # set +x 00:13:25.090 ************************************ 00:13:25.090 START TEST odd_alloc 00:13:25.090 ************************************ 00:13:25.090 11:04:33 -- common/autotest_common.sh@1111 -- # odd_alloc 00:13:25.090 11:04:33 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:13:25.090 11:04:33 -- setup/hugepages.sh@49 -- # local size=2098176 00:13:25.090 11:04:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:25.090 11:04:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:25.090 11:04:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:13:25.090 11:04:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:25.090 11:04:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:25.090 11:04:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:25.090 11:04:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:13:25.090 11:04:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:25.090 11:04:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:25.090 11:04:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:25.090 11:04:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:25.090 11:04:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:25.090 11:04:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:25.090 11:04:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:13:25.090 11:04:33 -- setup/hugepages.sh@83 -- # : 0 00:13:25.090 11:04:33 -- setup/hugepages.sh@84 -- # : 0 00:13:25.090 11:04:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:25.090 11:04:33 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:13:25.090 11:04:33 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:13:25.090 11:04:33 -- setup/hugepages.sh@160 -- # setup output 00:13:25.090 11:04:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:25.090 11:04:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:25.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:25.349 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:25.349 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:25.349 11:04:33 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:13:25.349 11:04:33 -- setup/hugepages.sh@89 -- # local node 00:13:25.349 11:04:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:25.349 11:04:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:25.349 11:04:33 -- setup/hugepages.sh@92 -- # local surp 00:13:25.349 11:04:33 -- setup/hugepages.sh@93 -- # local resv 00:13:25.349 11:04:33 -- setup/hugepages.sh@94 -- # local anon 00:13:25.349 11:04:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:25.349 11:04:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:25.349 11:04:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:25.349 11:04:33 -- setup/common.sh@18 -- # local node= 
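For odd_alloc, the trace shows get_test_nr_hugepages being handed 2098176 kB (HUGEMEM=2049) and settling on nr_hugepages=1025 on the single test node. With the 2048 kB hugepage size reported in the meminfo dumps below, that request works out to 1024.5 pages, so the helper ends up asking for the odd count 1025 (1025 * 2048 kB = 2099200 kB, the Hugetlb figure in the same dumps). A rough sketch of that arithmetic, assuming the helper simply rounds the kB request up to whole pages:

    # Illustrative arithmetic only; the real rounding lives in setup/hugepages.sh.
    size_kb=2098176       # HUGEMEM=2049 MB requested by the odd_alloc test
    hugepage_kb=2048      # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    echo "$nr_hugepages"  # 1025 -> deliberately odd, which is the point of this test

The get_meminfo calls that follow (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total) feed verify_nr_hugepages, which checks that the kernel really exposes 1025 pages with no surplus or reserved pages, i.e. 1025 == 1025 + 0 + 0 as seen at hugepages.sh@107 further down.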
00:13:25.349 11:04:33 -- setup/common.sh@19 -- # local var val 00:13:25.349 11:04:33 -- setup/common.sh@20 -- # local mem_f mem 00:13:25.349 11:04:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:25.349 11:04:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:25.349 11:04:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:25.349 11:04:33 -- setup/common.sh@28 -- # mapfile -t mem 00:13:25.349 11:04:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383408 kB' 'MemAvailable: 9483288 kB' 'Buffers: 2436 kB' 'Cached: 2309568 kB' 'SwapCached: 0 kB' 'Active: 891580 kB' 'Inactive: 1543112 kB' 'Active(anon): 133152 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1168 kB' 'Writeback: 0 kB' 'AnonPages: 124300 kB' 'Mapped: 49068 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145844 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75284 kB' 'KernelStack: 6500 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 
11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.349 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.349 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 
00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:25.350 11:04:33 -- setup/common.sh@33 -- # echo 0 00:13:25.350 11:04:33 -- setup/common.sh@33 -- # return 0 00:13:25.350 11:04:33 -- setup/hugepages.sh@97 -- # anon=0 00:13:25.350 11:04:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:25.350 11:04:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:25.350 11:04:33 -- setup/common.sh@18 -- # local node= 00:13:25.350 11:04:33 -- setup/common.sh@19 -- # local var val 00:13:25.350 11:04:33 -- setup/common.sh@20 -- # local mem_f mem 00:13:25.350 11:04:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:25.350 11:04:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:25.350 11:04:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:25.350 11:04:33 -- setup/common.sh@28 -- # mapfile -t mem 00:13:25.350 11:04:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 
11:04:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383408 kB' 'MemAvailable: 9483288 kB' 'Buffers: 2436 kB' 'Cached: 2309568 kB' 'SwapCached: 0 kB' 'Active: 891404 kB' 'Inactive: 1543112 kB' 'Active(anon): 132976 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1168 kB' 'Writeback: 0 kB' 'AnonPages: 124108 kB' 'Mapped: 49068 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145844 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75284 kB' 'KernelStack: 6468 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 
11:04:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- 
# IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 
11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read 
-r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.350 11:04:33 -- setup/common.sh@33 -- # echo 0 00:13:25.350 11:04:33 -- setup/common.sh@33 -- # return 0 00:13:25.350 11:04:33 -- setup/hugepages.sh@99 -- # surp=0 00:13:25.350 11:04:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:25.350 11:04:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:25.350 11:04:33 -- setup/common.sh@18 -- # local node= 00:13:25.350 11:04:33 -- setup/common.sh@19 -- # local var val 00:13:25.350 11:04:33 -- setup/common.sh@20 -- # local mem_f mem 00:13:25.350 11:04:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:25.350 11:04:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:25.350 11:04:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:25.350 11:04:33 -- setup/common.sh@28 -- # mapfile -t mem 00:13:25.350 11:04:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383408 kB' 'MemAvailable: 9483288 kB' 'Buffers: 2436 kB' 'Cached: 2309568 kB' 'SwapCached: 0 kB' 'Active: 890956 kB' 'Inactive: 1543112 kB' 'Active(anon): 132528 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1168 kB' 'Writeback: 0 kB' 'AnonPages: 123944 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145844 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75284 kB' 'KernelStack: 6512 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.350 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.350 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 
00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 
-- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 
11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:25.351 11:04:33 -- setup/common.sh@33 -- # echo 0 00:13:25.351 11:04:33 -- setup/common.sh@33 -- # return 0 00:13:25.351 11:04:33 -- setup/hugepages.sh@100 -- # resv=0 00:13:25.351 nr_hugepages=1025 00:13:25.351 11:04:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:13:25.351 resv_hugepages=0 00:13:25.351 11:04:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:25.351 surplus_hugepages=0 00:13:25.351 11:04:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:25.351 anon_hugepages=0 00:13:25.351 11:04:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:25.351 11:04:33 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:13:25.351 11:04:33 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:13:25.351 11:04:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:25.351 11:04:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:25.351 11:04:33 -- setup/common.sh@18 -- # local node= 00:13:25.351 11:04:33 -- setup/common.sh@19 -- # local var val 00:13:25.351 11:04:33 -- setup/common.sh@20 -- # local mem_f mem 00:13:25.351 11:04:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:25.351 11:04:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:25.351 11:04:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:25.351 11:04:33 -- setup/common.sh@28 -- # mapfile -t mem 00:13:25.351 11:04:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383408 kB' 'MemAvailable: 9483288 kB' 'Buffers: 2436 kB' 'Cached: 2309568 kB' 'SwapCached: 0 kB' 'Active: 891272 kB' 'Inactive: 1543112 kB' 'Active(anon): 132844 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1168 kB' 'Writeback: 0 kB' 'AnonPages: 123948 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145844 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75284 kB' 'KernelStack: 6480 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.351 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.351 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 
00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 
11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- 
# read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:25.352 11:04:33 -- setup/common.sh@33 -- # echo 1025 00:13:25.352 11:04:33 -- setup/common.sh@33 -- # return 0 00:13:25.352 11:04:33 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:13:25.352 11:04:33 -- setup/hugepages.sh@112 -- # get_nodes 00:13:25.352 11:04:33 -- setup/hugepages.sh@27 -- # local node 00:13:25.352 11:04:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:25.352 11:04:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:13:25.352 11:04:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:25.352 11:04:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:25.352 11:04:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:25.352 11:04:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:25.352 11:04:33 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:25.352 11:04:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:25.352 11:04:33 -- setup/common.sh@18 -- # local node=0 00:13:25.352 11:04:33 -- setup/common.sh@19 -- # local var val 00:13:25.352 11:04:33 -- setup/common.sh@20 -- # local mem_f mem 00:13:25.352 11:04:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:25.352 11:04:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:25.352 11:04:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:25.352 11:04:33 -- setup/common.sh@28 -- # mapfile -t mem 00:13:25.352 11:04:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383408 kB' 'MemUsed: 4858572 kB' 'SwapCached: 0 kB' 'Active: 890928 kB' 'Inactive: 1543112 kB' 'Active(anon): 132500 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1168 kB' 'Writeback: 0 kB' 'FilePages: 2312004 kB' 'Mapped: 48944 kB' 'AnonPages: 123900 kB' 'Shmem: 10464 kB' 'KernelStack: 6496 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70560 kB' 'Slab: 145844 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.352 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.352 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.609 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.609 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.609 11:04:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.609 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.609 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.609 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.609 11:04:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.609 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.609 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.609 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.609 11:04:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.609 11:04:33 -- setup/common.sh@32 -- # continue 
00:13:25.609 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.609 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.609 11:04:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.609 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.609 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 
11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # continue 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # IFS=': ' 00:13:25.610 11:04:33 -- setup/common.sh@31 -- # read -r var val _ 00:13:25.610 11:04:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:25.610 11:04:33 -- setup/common.sh@33 -- # echo 0 00:13:25.610 11:04:33 -- setup/common.sh@33 -- # return 0 00:13:25.610 11:04:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:25.610 11:04:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:25.610 11:04:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:25.610 11:04:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:25.610 node0=1025 expecting 1025 00:13:25.611 11:04:33 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:13:25.611 11:04:33 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:13:25.611 00:13:25.611 real 0m0.515s 00:13:25.611 user 0m0.257s 00:13:25.611 sys 0m0.294s 00:13:25.611 11:04:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:25.611 11:04:33 -- common/autotest_common.sh@10 -- # set +x 00:13:25.611 ************************************ 00:13:25.611 END TEST odd_alloc 00:13:25.611 ************************************ 00:13:25.611 11:04:33 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:13:25.611 11:04:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:25.611 11:04:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.611 11:04:33 -- common/autotest_common.sh@10 -- # set +x 00:13:25.611 ************************************ 00:13:25.611 START TEST custom_alloc 00:13:25.611 ************************************ 00:13:25.611 11:04:33 -- common/autotest_common.sh@1111 -- # custom_alloc 00:13:25.611 11:04:33 -- setup/hugepages.sh@167 -- # local IFS=, 00:13:25.611 11:04:33 -- setup/hugepages.sh@169 -- # local node 00:13:25.611 11:04:33 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:13:25.611 11:04:33 -- setup/hugepages.sh@170 -- # local nodes_hp 00:13:25.611 11:04:33 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:13:25.611 11:04:33 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:13:25.611 11:04:33 -- setup/hugepages.sh@49 -- # local size=1048576 00:13:25.611 11:04:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:13:25.611 11:04:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:25.611 11:04:33 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
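The field-by-field scans traced above (HugePages_Total over /proc/meminfo, then HugePages_Surp over the node-0 meminfo file) are the harness reading a single value out of a meminfo-style file with an IFS=': ' / read -r loop. Below is a minimal standalone sketch of an equivalent lookup; the function name meminfo_value and its argument handling are illustrative, not the harness's own get_meminfo helper, and the per-node prefix stripping is simplified from the mapfile-based version seen in the trace.

# Sketch: read one field from /proc/meminfo, or from a per-node meminfo file
# under /sys/devices/system/node/, the way the loop traced above does.
meminfo_value() {
    local get=$1 node=${2:-}
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # per-node files prefix every line with "Node <N> "; strip it first
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$file"
    return 1
}
# e.g. meminfo_value HugePages_Total    -> 1025 for the odd_alloc run above
#      meminfo_value HugePages_Surp 0   -> 0    (node 0)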
00:13:25.611 11:04:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:13:25.611 11:04:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:25.611 11:04:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:25.611 11:04:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:13:25.611 11:04:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:25.611 11:04:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:25.611 11:04:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:25.611 11:04:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:25.611 11:04:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:13:25.611 11:04:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:25.611 11:04:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:13:25.611 11:04:33 -- setup/hugepages.sh@83 -- # : 0 00:13:25.611 11:04:33 -- setup/hugepages.sh@84 -- # : 0 00:13:25.611 11:04:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:13:25.611 11:04:33 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:13:25.611 11:04:33 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:13:25.611 11:04:33 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:13:25.611 11:04:33 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:13:25.611 11:04:33 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:13:25.611 11:04:33 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:13:25.611 11:04:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:13:25.611 11:04:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:25.611 11:04:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:13:25.611 11:04:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:25.611 11:04:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:25.611 11:04:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:25.611 11:04:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:13:25.611 11:04:33 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:13:25.611 11:04:33 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:13:25.611 11:04:33 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:13:25.611 11:04:33 -- setup/hugepages.sh@78 -- # return 0 00:13:25.611 11:04:33 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:13:25.611 11:04:33 -- setup/hugepages.sh@187 -- # setup output 00:13:25.611 11:04:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:25.611 11:04:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:25.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:25.870 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:25.870 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:25.870 11:04:34 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:13:25.870 11:04:34 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:13:25.870 11:04:34 -- setup/hugepages.sh@89 -- # local node 00:13:25.870 11:04:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:25.870 11:04:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:25.870 11:04:34 -- setup/hugepages.sh@92 -- # local surp 00:13:25.870 11:04:34 -- setup/hugepages.sh@93 -- # local resv 00:13:25.870 11:04:34 -- setup/hugepages.sh@94 -- # local anon 00:13:25.870 11:04:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:26.131 11:04:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:26.131 
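The 512 figure and the HUGENODE='nodes_hp[0]=512' value in the trace above come from dividing the 1048576 kB (1 GiB) request by the 2048 kB hugepage size reported in /proc/meminfo, and assigning the whole count to the only NUMA node present. A minimal sketch of that arithmetic follows; the variable names are illustrative, and the harness builds the same comma-joined string through its own array plumbing rather than this exact code.

# Sketch: per-node hugepage sizing for the custom_alloc case traced above.
kb_requested=1048576                                            # 1 GiB, as passed to get_test_nr_hugepages
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
nr_hugepages=$(( kb_requested / hugepage_kb ))                  # -> 512
declare -a nodes_hp=( [0]=$nr_hugepages )                       # single node: everything on node 0
HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done
( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )                       # -> HUGENODE=nodes_hp[0]=512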
11:04:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:26.131 11:04:34 -- setup/common.sh@18 -- # local node= 00:13:26.131 11:04:34 -- setup/common.sh@19 -- # local var val 00:13:26.131 11:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:13:26.131 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:26.131 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:26.131 11:04:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:26.131 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem 00:13:26.131 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:26.131 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8438396 kB' 'MemAvailable: 10538276 kB' 'Buffers: 2436 kB' 'Cached: 2309568 kB' 'SwapCached: 0 kB' 'Active: 891680 kB' 'Inactive: 1543112 kB' 'Active(anon): 133252 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1304 kB' 'Writeback: 0 kB' 'AnonPages: 124412 kB' 'Mapped: 49052 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145800 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75240 kB' 'KernelStack: 6500 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 
11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.132 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.132 11:04:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.133 11:04:34 -- setup/common.sh@33 -- # echo 0 00:13:26.133 11:04:34 -- setup/common.sh@33 -- # return 0 00:13:26.133 11:04:34 -- setup/hugepages.sh@97 -- # anon=0 00:13:26.133 11:04:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:26.133 11:04:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:26.133 11:04:34 -- setup/common.sh@18 -- # local node= 00:13:26.133 11:04:34 -- setup/common.sh@19 -- # local var val 00:13:26.133 11:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:13:26.133 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:26.133 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:26.133 11:04:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:26.133 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem 00:13:26.133 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
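The "[[ always [madvise] never != *\[never\]* ]]" check a few lines back is the harness reading the transparent hugepage state before it counts AnonHugePages; anon stays 0 when THP is set to never. The sketch below shows an equivalent standalone check, reusing the illustrative meminfo_value helper from earlier; the sysfs path is the standard location for that setting, and the variable names are assumptions, not the harness's own.

# Sketch: gate the AnonHugePages count on the THP state, as in the trace above.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(meminfo_value AnonHugePages)    # 0 kB in the dump above
fi
echo "anon_hugepages=$anon"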
00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8438396 kB' 'MemAvailable: 10538276 kB' 'Buffers: 2436 kB' 'Cached: 2309568 kB' 'SwapCached: 0 kB' 'Active: 891132 kB' 'Inactive: 1543112 kB' 'Active(anon): 132704 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1304 kB' 'Writeback: 0 kB' 'AnonPages: 123816 kB' 'Mapped: 49052 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145800 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75240 kB' 'KernelStack: 6452 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 
00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.133 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.133 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 
11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.134 11:04:34 -- setup/common.sh@33 -- # echo 0 00:13:26.134 11:04:34 -- setup/common.sh@33 -- # return 0 00:13:26.134 11:04:34 -- setup/hugepages.sh@99 -- # surp=0 00:13:26.134 11:04:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:26.134 11:04:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:26.134 11:04:34 -- setup/common.sh@18 -- # local node= 00:13:26.134 11:04:34 -- setup/common.sh@19 -- # local var val 00:13:26.134 11:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:13:26.134 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:26.134 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:26.134 11:04:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:26.134 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem 00:13:26.134 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8438144 kB' 'MemAvailable: 10538024 kB' 'Buffers: 2436 kB' 'Cached: 2309568 kB' 'SwapCached: 0 kB' 'Active: 891216 kB' 'Inactive: 1543112 kB' 'Active(anon): 132788 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1304 kB' 'Writeback: 0 kB' 'AnonPages: 123668 kB' 'Mapped: 48992 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145800 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75240 kB' 'KernelStack: 6452 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.134 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.134 11:04:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 
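(Illustrative aside, not part of the captured log: the long runs of "continue" entries above come from setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key until it reaches the requested field. The sketch below is a simplified approximation of that loop; the name get_meminfo_sketch is invented here, and the per-node file path mirrors the /sys/devices/system/node/node0/meminfo path seen later in this trace.)

    # Simplified sketch of the traced loop: read meminfo, strip any "Node N "
    # prefix, split each "Key: value" pair on ': ', and echo the value once the
    # requested key is found. Every non-matching key is the "continue" seen above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val rest
        while IFS=': ' read -r var val rest; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    # On this runner the values traced above would be, e.g.:
    #   get_meminfo_sketch HugePages_Surp     -> 0
    #   get_meminfo_sketch HugePages_Total 0  -> 512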
00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # 
IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.135 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.135 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 
-- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:26.136 11:04:34 -- setup/common.sh@33 -- # echo 0 00:13:26.136 11:04:34 -- setup/common.sh@33 -- # return 0 00:13:26.136 11:04:34 -- setup/hugepages.sh@100 -- # resv=0 00:13:26.136 nr_hugepages=512 00:13:26.136 11:04:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:13:26.136 resv_hugepages=0 00:13:26.136 11:04:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:26.136 surplus_hugepages=0 00:13:26.136 11:04:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:26.136 anon_hugepages=0 00:13:26.136 11:04:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:26.136 11:04:34 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:13:26.136 11:04:34 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:13:26.136 11:04:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:26.136 11:04:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:26.136 11:04:34 -- setup/common.sh@18 -- # local node= 00:13:26.136 11:04:34 -- setup/common.sh@19 -- # local var val 00:13:26.136 11:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:13:26.136 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:26.136 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:26.136 11:04:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:26.136 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem 00:13:26.136 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8438144 kB' 'MemAvailable: 10538024 kB' 'Buffers: 2436 kB' 'Cached: 2309568 kB' 'SwapCached: 0 kB' 'Active: 891268 kB' 'Inactive: 1543112 kB' 'Active(anon): 132840 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1304 kB' 'Writeback: 0 kB' 'AnonPages: 124068 kB' 'Mapped: 49512 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145800 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75240 kB' 'KernelStack: 6500 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 359436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 
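(Illustrative aside, not part of the captured log: the echoes nr_hugepages=512, resv_hugepages=0 and surplus_hugepages=0 above feed a simple accounting check in setup/hugepages.sh. A rough sketch of that arithmetic, reusing the hypothetical get_meminfo_sketch helper from the earlier aside:)

    # The pool is accepted when the kernel's reported total matches the
    # requested page count plus any surplus and reserved pages; the per-node
    # pass later in the trace repeats this and prints "node0=512 expecting 512".
    nr_hugepages=512 surp=0 resv=0            # values echoed in the trace above
    total=$(get_meminfo_sketch HugePages_Total)   # 512 on this runner
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool accounted for: $total pages"
    fi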
00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.136 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.136 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:26.137 11:04:34 -- setup/common.sh@33 -- # echo 512 00:13:26.137 11:04:34 -- setup/common.sh@33 -- # return 0 00:13:26.137 11:04:34 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:13:26.137 11:04:34 -- setup/hugepages.sh@112 -- # get_nodes 00:13:26.137 11:04:34 -- setup/hugepages.sh@27 -- # local node 00:13:26.137 11:04:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:26.137 11:04:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:13:26.137 11:04:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:26.137 11:04:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:26.137 11:04:34 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:13:26.137 11:04:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:26.137 11:04:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:26.137 11:04:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:26.137 11:04:34 -- setup/common.sh@18 -- # local node=0 00:13:26.137 11:04:34 -- setup/common.sh@19 -- # local var val 00:13:26.137 11:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:13:26.137 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:26.137 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:26.137 11:04:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:26.137 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem 00:13:26.137 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8438144 kB' 'MemUsed: 3803836 kB' 'SwapCached: 0 kB' 'Active: 891324 kB' 'Inactive: 1543108 kB' 'Active(anon): 132896 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1304 kB' 'Writeback: 0 kB' 'FilePages: 2312000 kB' 'Mapped: 48992 kB' 'AnonPages: 124116 kB' 'Shmem: 10464 kB' 'KernelStack: 6452 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70560 kB' 'Slab: 145800 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.137 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.137 11:04:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 
11:04:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 
-- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.138 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.138 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.138 11:04:34 -- setup/common.sh@33 -- # echo 0 00:13:26.138 11:04:34 -- setup/common.sh@33 -- # return 0 00:13:26.138 11:04:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:26.138 11:04:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:26.138 11:04:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:26.138 11:04:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:26.138 11:04:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:13:26.138 node0=512 expecting 512 00:13:26.138 11:04:34 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:13:26.138 00:13:26.138 real 0m0.522s 00:13:26.138 user 0m0.269s 00:13:26.138 sys 0m0.290s 00:13:26.138 11:04:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:26.138 11:04:34 -- common/autotest_common.sh@10 -- # set +x 00:13:26.138 ************************************ 00:13:26.138 END TEST custom_alloc 00:13:26.138 ************************************ 00:13:26.138 11:04:34 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:13:26.138 11:04:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:26.138 11:04:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:26.138 11:04:34 -- common/autotest_common.sh@10 -- # set +x 00:13:26.138 ************************************ 00:13:26.138 START TEST no_shrink_alloc 00:13:26.138 ************************************ 00:13:26.138 11:04:34 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:13:26.138 11:04:34 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:13:26.138 11:04:34 -- setup/hugepages.sh@49 -- # local size=2097152 00:13:26.138 11:04:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:13:26.138 11:04:34 -- setup/hugepages.sh@51 -- # shift 00:13:26.139 11:04:34 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:13:26.139 11:04:34 -- setup/hugepages.sh@52 -- # local node_ids 00:13:26.139 11:04:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:13:26.139 11:04:34 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:13:26.139 11:04:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:13:26.139 11:04:34 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:13:26.139 11:04:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:13:26.139 11:04:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:13:26.139 11:04:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:13:26.139 11:04:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:13:26.139 11:04:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:13:26.139 11:04:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:13:26.139 11:04:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:13:26.139 11:04:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:13:26.139 11:04:34 -- setup/hugepages.sh@73 -- # return 0 00:13:26.139 11:04:34 -- setup/hugepages.sh@198 -- # setup output 00:13:26.139 11:04:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:26.139 11:04:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:26.709 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:26.709 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:26.709 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:26.709 11:04:34 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:13:26.709 11:04:34 -- setup/hugepages.sh@89 -- # local node 00:13:26.709 11:04:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:26.709 11:04:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:26.709 11:04:34 -- setup/hugepages.sh@92 -- # local surp 00:13:26.709 11:04:34 -- setup/hugepages.sh@93 -- # local resv 00:13:26.709 11:04:34 -- setup/hugepages.sh@94 -- # local anon 00:13:26.709 11:04:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:26.709 11:04:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:26.709 11:04:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:26.709 11:04:34 -- setup/common.sh@18 -- # local node= 00:13:26.709 11:04:34 -- setup/common.sh@19 -- # local var val 00:13:26.709 11:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:13:26.709 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:26.709 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:26.709 11:04:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:26.709 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem 00:13:26.709 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383100 kB' 'MemAvailable: 9483016 kB' 'Buffers: 2436 kB' 'Cached: 2309604 kB' 'SwapCached: 0 kB' 'Active: 891436 kB' 'Inactive: 1543148 kB' 'Active(anon): 133008 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'AnonPages: 124436 kB' 'Mapped: 49228 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145780 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75220 kB' 'KernelStack: 6548 kB' 'PageTables: 4652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read 
-r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.709 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.709 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 
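(Illustrative aside, not part of the captured log: the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" entry above is the transparent-hugepage guard in verify_nr_hugepages. A rough sketch of that step; the sysfs path is the usual THP location and is assumed here, not shown in the log, and get_meminfo_sketch is the hypothetical helper from the earlier aside:)

    # If THP is not pinned to "never", AnonHugePages is read separately so
    # transparent huge pages are not confused with the reserved pool.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in the trace below
    fi
    echo "anon_hugepages=$anon"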
00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- 
setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:26.710 11:04:34 -- setup/common.sh@33 -- # echo 0 00:13:26.710 11:04:34 -- setup/common.sh@33 -- # return 0 00:13:26.710 11:04:34 -- setup/hugepages.sh@97 -- # anon=0 00:13:26.710 11:04:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:26.710 11:04:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:26.710 11:04:34 -- setup/common.sh@18 -- # local node= 00:13:26.710 11:04:34 -- setup/common.sh@19 -- # local var val 00:13:26.710 11:04:34 -- setup/common.sh@20 -- # local mem_f mem 00:13:26.710 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:26.710 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:26.710 11:04:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:26.710 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem 00:13:26.710 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': ' 00:13:26.710 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383100 kB' 'MemAvailable: 9483016 kB' 'Buffers: 2436 kB' 'Cached: 2309604 kB' 'SwapCached: 0 kB' 'Active: 891004 kB' 'Inactive: 1543148 kB' 'Active(anon): 132576 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'AnonPages: 123976 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145776 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75216 kB' 'KernelStack: 6496 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _ 00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:26.710 11:04:34 -- 
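The block above is SPDK's setup/common.sh get_meminfo helper walking /proc/meminfo one key at a time with IFS=': ' read until it reaches the requested field; AnonHugePages reports 0 kB here, so setup/hugepages.sh records anon=0 before moving on to HugePages_Surp. A simplified, standalone sketch of that lookup pattern (hypothetical function name, not the actual common.sh code):

# Sketch: fetch one value from a meminfo-style file ("Key:   value kB" per line).
meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Return the first value whose key matches the requested field.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < "$file"
    echo 0
}
# e.g. meminfo_value AnonHugePages   -> 0 on this runner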
00:13:26.710 11:04:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:13:26.710 11:04:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:13:26.710 11:04:34 -- setup/common.sh@18 -- # local node=
00:13:26.710 11:04:34 -- setup/common.sh@19 -- # local var val
00:13:26.710 11:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:13:26.710 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:26.710 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:26.710 11:04:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:26.710 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:13:26.710 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:26.710 11:04:34 -- setup/common.sh@31 -- # IFS=': '
00:13:26.710 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383100 kB' 'MemAvailable: 9483016 kB' 'Buffers: 2436 kB' 'Cached: 2309604 kB' 'SwapCached: 0 kB' 'Active: 891004 kB' 'Inactive: 1543148 kB' 'Active(anon): 132576 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'AnonPages: 123976 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145776 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75216 kB' 'KernelStack: 6496 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB'
00:13:26.710 11:04:34 -- setup/common.sh@31 -- # read -r var val _
00:13:26.710 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:26.710 11:04:34 -- setup/common.sh@32 -- # continue
00:13:26.711 11:04:34 -- setup/common.sh@31 -- # IFS=': '
00:13:26.711 11:04:34 -- setup/common.sh@31 -- # read -r var val _
00:13:26.711 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:26.711 11:04:34 -- setup/common.sh@33 -- # echo 0
00:13:26.711 11:04:34 -- setup/common.sh@33 -- # return 0
00:13:26.711 11:04:34 -- setup/hugepages.sh@99 -- # surp=0
00:13:26.711 11:04:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:13:26.711 11:04:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:13:26.711 11:04:34 -- setup/common.sh@18 -- # local node=
00:13:26.711 11:04:34 -- setup/common.sh@19 -- # local var val
00:13:26.711 11:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:13:26.711 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:26.711 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:26.711 11:04:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:26.711 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:13:26.711 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:26.712 11:04:34 -- setup/common.sh@31 -- # IFS=': '
00:13:26.712 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383100 kB' 'MemAvailable: 9483016 kB' 'Buffers: 2436 kB' 'Cached: 2309604 kB' 'SwapCached: 0 kB' 'Active: 891008 kB' 'Inactive: 1543148 kB' 'Active(anon): 132580 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'AnonPages: 124008 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145776 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75216 kB' 'KernelStack: 6480 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB'
00:13:26.712 11:04:34 -- setup/common.sh@31 -- # read -r var val _
00:13:26.712 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:13:26.712 11:04:34 -- setup/common.sh@32 -- # continue
00:13:26.713 11:04:34 -- setup/common.sh@31 -- # IFS=': '
00:13:26.713 11:04:34 -- setup/common.sh@31 -- # read -r var val _
00:13:26.713 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:13:26.713 11:04:34 -- setup/common.sh@33 -- # echo 0
00:13:26.713 11:04:34 -- setup/common.sh@33 -- # return 0
00:13:26.713 11:04:34 -- setup/hugepages.sh@100 -- # resv=0
00:13:26.713 nr_hugepages=1024
00:13:26.713 11:04:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:13:26.713 resv_hugepages=0
00:13:26.713 11:04:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:13:26.713 surplus_hugepages=0
00:13:26.713 11:04:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:13:26.713 anon_hugepages=0
00:13:26.713 11:04:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:13:26.713 11:04:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:13:26.713 11:04:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
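With anon, surp and resv collected, setup/hugepages.sh checks that the 1024 pages requested for this test are fully accounted for before re-reading HugePages_Total from /proc/meminfo. In this run the arithmetic is trivial, and it also agrees with the snapshot above (1024 pages x 2048 kB Hugepagesize = 2097152 kB Hugetlb); a sketch using the values from this log:

# Values taken from the trace above.
nr_hugepages=1024   # pages requested by the test
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
(( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
echo $(( nr_hugepages * 2048 )) kB   # 2097152 kB, matching Hugetlb in /proc/meminfo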
00:13:26.713 11:04:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:13:26.713 11:04:34 -- setup/common.sh@17 -- # local get=HugePages_Total
00:13:26.713 11:04:34 -- setup/common.sh@18 -- # local node=
00:13:26.713 11:04:34 -- setup/common.sh@19 -- # local var val
00:13:26.713 11:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:13:26.713 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:26.713 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:13:26.713 11:04:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:13:26.713 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:13:26.713 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:26.713 11:04:34 -- setup/common.sh@31 -- # IFS=': '
00:13:26.713 11:04:34 -- setup/common.sh@31 -- # read -r var val _
00:13:26.713 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383100 kB' 'MemAvailable: 9483016 kB' 'Buffers: 2436 kB' 'Cached: 2309604 kB' 'SwapCached: 0 kB' 'Active: 891036 kB' 'Inactive: 1543148 kB' 'Active(anon): 132608 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'AnonPages: 124028 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145776 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75216 kB' 'KernelStack: 6496 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB'
00:13:26.713 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:13:26.713 11:04:34 -- setup/common.sh@32 -- # continue
00:13:26.715 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:13:26.715 11:04:34 -- setup/common.sh@33 -- # echo 1024
00:13:26.715 11:04:34 -- setup/common.sh@33 -- # return 0
00:13:26.715 11:04:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:13:26.715 11:04:34 -- setup/hugepages.sh@112 -- # get_nodes
00:13:26.715 11:04:34 -- setup/hugepages.sh@27 -- # local node
00:13:26.715 11:04:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:13:26.715 11:04:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:13:26.715 11:04:34 -- setup/hugepages.sh@32 -- # no_nodes=1
00:13:26.715 11:04:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:13:26.715 11:04:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:13:26.715 11:04:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
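The per-node pass that follows calls get_meminfo with a node index (HugePages_Surp 0), which switches mem_f to /sys/devices/system/node/node0/meminfo and strips the "Node 0" prefix those files put on every line before running the same key scan. A rough standalone sketch of that variant (hypothetical function name, assumes bash with extglob; not the actual common.sh code):

# Sketch: look up one key in a per-node meminfo file.
shopt -s extglob
node_meminfo_value() {
    local get=$1 node=$2 line var val _
    local mem_f=/sys/devices/system/node/node$node/meminfo mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <n> " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    echo 0
}
# e.g. node_meminfo_value HugePages_Surp 0   -> 0 on node0 of this VM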
00:13:26.715 11:04:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:13:26.715 11:04:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:13:26.715 11:04:34 -- setup/common.sh@18 -- # local node=0
00:13:26.715 11:04:34 -- setup/common.sh@19 -- # local var val
00:13:26.715 11:04:34 -- setup/common.sh@20 -- # local mem_f mem
00:13:26.715 11:04:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:13:26.715 11:04:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:13:26.715 11:04:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:13:26.715 11:04:34 -- setup/common.sh@28 -- # mapfile -t mem
00:13:26.715 11:04:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:13:26.715 11:04:34 -- setup/common.sh@31 -- # IFS=': '
00:13:26.715 11:04:34 -- setup/common.sh@31 -- # read -r var val _
00:13:26.715 11:04:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383100 kB' 'MemUsed: 4858880 kB' 'SwapCached: 0 kB' 'Active: 890972 kB' 'Inactive: 1543148 kB' 'Active(anon): 132544 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'FilePages: 2312040 kB' 'Mapped: 48912 kB' 'AnonPages: 124012 kB' 'Shmem: 10464 kB' 'KernelStack: 6512 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70560 kB' 'Slab: 145776 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:13:26.715 11:04:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:26.715 11:04:34 -- setup/common.sh@32 -- # continue
00:13:26.716 11:04:34 -- setup/common.sh@31 -- # IFS=': '
00:13:26.716 11:04:34 -- setup/common.sh@31 -- # read -r var val _
00:13:26.716 11:04:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:13:26.716 11:04:34 -- setup/common.sh@33 -- # echo 0
00:13:26.716 11:04:34 -- setup/common.sh@33 -- # return 0
00:13:26.716 11:04:34 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:13:26.716 11:04:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:26.716 11:04:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:26.716 11:04:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:26.716 node0=1024 expecting 1024 00:13:26.716 11:04:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:26.716 11:04:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:26.716 11:04:34 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:13:26.716 11:04:34 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:13:26.716 11:04:34 -- setup/hugepages.sh@202 -- # setup output 00:13:26.716 11:04:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:26.716 11:04:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:26.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:27.235 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:27.235 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:27.235 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:13:27.235 11:04:35 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:13:27.235 11:04:35 -- setup/hugepages.sh@89 -- # local node 00:13:27.235 11:04:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:13:27.235 11:04:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:13:27.235 11:04:35 -- setup/hugepages.sh@92 -- # local surp 00:13:27.235 11:04:35 -- setup/hugepages.sh@93 -- # local resv 00:13:27.235 11:04:35 -- setup/hugepages.sh@94 -- # local anon 00:13:27.235 11:04:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:13:27.235 11:04:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:13:27.235 11:04:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:13:27.235 11:04:35 -- setup/common.sh@18 -- # local node= 00:13:27.235 11:04:35 -- setup/common.sh@19 -- # local var val 00:13:27.235 11:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:13:27.235 11:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:27.235 11:04:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:27.235 11:04:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:27.235 11:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:13:27.235 11:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:27.235 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.235 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.235 11:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383536 kB' 'MemAvailable: 9483452 kB' 'Buffers: 2436 kB' 'Cached: 2309604 kB' 'SwapCached: 0 kB' 'Active: 891640 kB' 'Inactive: 1543148 kB' 'Active(anon): 133212 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'AnonPages: 124616 kB' 'Mapped: 48892 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145808 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75248 kB' 'KernelStack: 6516 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:27.235 11:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.235 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.235 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.235 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.235 11:04:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.235 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.235 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.235 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.235 11:04:35 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- 
setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # 
read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.236 11:04:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:13:27.236 11:04:35 -- setup/common.sh@33 -- # echo 0 00:13:27.236 11:04:35 -- setup/common.sh@33 -- # return 0 00:13:27.236 11:04:35 -- setup/hugepages.sh@97 -- # anon=0 00:13:27.236 11:04:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:13:27.236 11:04:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:27.236 11:04:35 -- setup/common.sh@18 -- # local node= 00:13:27.236 11:04:35 -- setup/common.sh@19 -- # local var val 00:13:27.236 11:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:13:27.236 11:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:27.236 11:04:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:27.236 11:04:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:27.236 11:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:13:27.236 11:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.236 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383784 kB' 'MemAvailable: 9483700 kB' 'Buffers: 2436 kB' 'Cached: 2309604 kB' 'SwapCached: 0 kB' 'Active: 891004 kB' 'Inactive: 1543148 kB' 'Active(anon): 132576 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'AnonPages: 123756 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145808 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75248 kB' 'KernelStack: 6512 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read 
-r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 
11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
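Each get_meminfo call in the trace also probes /sys/devices/system/node/node$node/meminfo (empty node here, so it falls through to /proc/meminfo) and strips the leading "Node N " prefix with mem=("${mem[@]#Node +([0-9]) }"). A hedged sketch of that node-aware selection is below; the function name is illustrative and extglob is required for the +([0-9]) pattern.

shopt -s extglob                        # needed for the +([0-9]) prefix pattern

read_meminfo_lines() {
    local node=$1 mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start with "Node N "
    printf '%s\n' "${mem[@]}"
}

read_meminfo_lines                      # system-wide view, as in this trace
read_meminfo_lines 0                    # node 0 only, if the sysfs file exists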
00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.237 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.237 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r 
var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.238 11:04:35 -- 
setup/common.sh@33 -- # echo 0 00:13:27.238 11:04:35 -- setup/common.sh@33 -- # return 0 00:13:27.238 11:04:35 -- setup/hugepages.sh@99 -- # surp=0 00:13:27.238 11:04:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:13:27.238 11:04:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:13:27.238 11:04:35 -- setup/common.sh@18 -- # local node= 00:13:27.238 11:04:35 -- setup/common.sh@19 -- # local var val 00:13:27.238 11:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:13:27.238 11:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:27.238 11:04:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:13:27.238 11:04:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:27.238 11:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:13:27.238 11:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383784 kB' 'MemAvailable: 9483700 kB' 'Buffers: 2436 kB' 'Cached: 2309604 kB' 'SwapCached: 0 kB' 'Active: 891040 kB' 'Inactive: 1543148 kB' 'Active(anon): 132612 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'AnonPages: 123784 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145808 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75248 kB' 'KernelStack: 6512 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- 
setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.238 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.238 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 
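The earlier "INFO: Requested 512 hugepages but 1024 already allocated on node0" notice comes from the NRHUGE=512 reservation step run by scripts/setup.sh. The sketch below shows the kind of grow-only check against the per-node sysfs counter that would produce such a message; the policy and variable handling are assumptions drawn from the log, not the literal setup.sh code.

NRHUGE=${NRHUGE:-512}
nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

current=$(cat "$nr")
if (( current >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
else
    echo "$NRHUGE" > "$nr"              # needs root; only ever grows the pool
fi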
00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.239 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:13:27.239 11:04:35 -- setup/common.sh@33 -- # echo 0 00:13:27.239 11:04:35 -- setup/common.sh@33 -- # return 0 00:13:27.239 11:04:35 -- setup/hugepages.sh@100 -- # resv=0 00:13:27.239 nr_hugepages=1024 00:13:27.239 11:04:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:13:27.239 11:04:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:13:27.239 resv_hugepages=0 00:13:27.239 surplus_hugepages=0 00:13:27.239 11:04:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:13:27.239 anon_hugepages=0 00:13:27.239 11:04:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:13:27.239 11:04:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:27.239 11:04:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:13:27.239 11:04:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:13:27.239 11:04:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:13:27.239 11:04:35 -- setup/common.sh@18 -- # local node= 00:13:27.239 11:04:35 -- setup/common.sh@19 -- # local var val 00:13:27.239 11:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:13:27.239 11:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:27.239 11:04:35 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:13:27.239 11:04:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:13:27.239 11:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:13:27.239 11:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.239 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7383784 kB' 'MemAvailable: 9483700 kB' 'Buffers: 2436 kB' 'Cached: 2309604 kB' 'SwapCached: 0 kB' 'Active: 891064 kB' 'Inactive: 1543148 kB' 'Active(anon): 132636 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'AnonPages: 124068 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 70560 kB' 'Slab: 145808 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75248 kB' 'KernelStack: 6512 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 
-- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 
11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.240 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.240 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:13:27.241 11:04:35 -- setup/common.sh@33 -- # echo 1024 00:13:27.241 11:04:35 -- setup/common.sh@33 -- # return 0 00:13:27.241 11:04:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:13:27.241 11:04:35 -- setup/hugepages.sh@112 -- # get_nodes 00:13:27.241 11:04:35 -- setup/hugepages.sh@27 -- # local node 00:13:27.241 11:04:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:13:27.241 11:04:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:13:27.241 11:04:35 -- setup/hugepages.sh@32 -- # no_nodes=1 00:13:27.241 11:04:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:13:27.241 11:04:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:13:27.241 11:04:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:13:27.241 11:04:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:13:27.241 11:04:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:13:27.241 11:04:35 -- setup/common.sh@18 -- # local node=0 00:13:27.241 11:04:35 -- setup/common.sh@19 -- # local var val 00:13:27.241 11:04:35 -- setup/common.sh@20 -- # local mem_f mem 00:13:27.241 11:04:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:13:27.241 11:04:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:13:27.241 11:04:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:13:27.241 11:04:35 -- setup/common.sh@28 -- # mapfile -t mem 00:13:27.241 11:04:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7385736 kB' 'MemUsed: 4856244 kB' 'SwapCached: 0 kB' 'Active: 886712 kB' 'Inactive: 1543148 kB' 'Active(anon): 128284 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1543148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'FilePages: 2312040 kB' 'Mapped: 48272 kB' 'AnonPages: 119444 kB' 'Shmem: 10464 kB' 'KernelStack: 6448 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70560 kB' 'Slab: 145804 kB' 'SReclaimable: 70560 kB' 'SUnreclaim: 75244 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 
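The long scan in progress here is the generic field lookup the setup helpers rely on: load the node's meminfo snapshot (the printf line above shows the node0 copy with its "Node 0 " prefixes already stripped), then walk it entry by entry with IFS=': ' until the requested key, in this case HugePages_Surp, matches. A minimal stand-alone sketch of that lookup follows; the function name and argument handling are illustrative, not the exact SPDK helper.

#!/usr/bin/env bash
# Sketch: print one meminfo value, optionally scoped to a NUMA node, using the
# same mapfile + IFS=': ' scan pattern visible in the trace above.
shopt -s extglob

get_meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node accounting lives in sysfs when a node number is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <n> "; strip it so the field
    # names line up with the plain /proc/meminfo layout.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # kB for sizes, a bare count for HugePages_* fields
            return 0
        fi
    done
    return 1
}

# Usage: the two lookups this part of the log performs.
get_meminfo_field HugePages_Surp 0
get_meminfo_field HugePages_Total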
00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.241 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.241 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # continue 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # IFS=': ' 00:13:27.242 11:04:35 -- setup/common.sh@31 -- # read -r var val _ 00:13:27.242 11:04:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:13:27.242 11:04:35 -- setup/common.sh@33 -- # echo 0 00:13:27.242 11:04:35 -- setup/common.sh@33 -- # return 0 00:13:27.242 11:04:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:13:27.242 11:04:35 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:13:27.242 11:04:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:13:27.242 11:04:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:13:27.242 node0=1024 expecting 1024 00:13:27.242 11:04:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:13:27.242 11:04:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:13:27.242 00:13:27.242 real 0m1.056s 00:13:27.242 user 0m0.547s 00:13:27.242 sys 0m0.572s 00:13:27.242 11:04:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:27.242 11:04:35 -- common/autotest_common.sh@10 -- # set +x 00:13:27.242 ************************************ 00:13:27.242 END TEST no_shrink_alloc 00:13:27.242 ************************************ 00:13:27.242 11:04:35 -- setup/hugepages.sh@217 -- # clear_hp 00:13:27.242 11:04:35 -- setup/hugepages.sh@37 -- # local node hp 00:13:27.242 11:04:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:13:27.242 11:04:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:27.242 11:04:35 -- setup/hugepages.sh@41 -- # echo 0 00:13:27.242 11:04:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:13:27.242 11:04:35 -- setup/hugepages.sh@41 -- # echo 0 00:13:27.242 11:04:35 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:13:27.242 11:04:35 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:13:27.242 00:13:27.242 real 0m4.983s 00:13:27.242 user 0m2.377s 00:13:27.242 sys 0m2.690s 00:13:27.242 11:04:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:27.242 11:04:35 -- common/autotest_common.sh@10 -- # set +x 00:13:27.242 ************************************ 00:13:27.242 END TEST hugepages 00:13:27.242 ************************************ 00:13:27.500 11:04:35 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:13:27.500 11:04:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:27.500 11:04:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:27.500 11:04:35 -- common/autotest_common.sh@10 -- # set +x 00:13:27.500 ************************************ 00:13:27.500 START TEST driver 00:13:27.500 ************************************ 00:13:27.500 11:04:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:13:27.500 * Looking for test storage... 
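The hugepages suite above finishes by clearing every reservation it made: clear_hp loops over each node's hugepages-* directories and writes 0 back. xtrace does not print redirection targets, so the nr_hugepages path in the sketch below is an assumption based on the standard sysfs layout rather than something visible in this log, and the function name is likewise illustrative.

#!/usr/bin/env bash
# Sketch: drop all huge page reservations, one node and page size at a time.
# Requires root; the nr_hugepages target is assumed, the trace only shows "echo 0".
clear_hugepages() {
    local node_dir hp
    for node_dir in /sys/devices/system/node/node*; do
        for hp in "$node_dir"/hugepages/hugepages-*; do
            [[ -e $hp/nr_hugepages ]] || continue
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # flag read by later setup scripts, per the trace
}

clear_hugepages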
00:13:27.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:27.500 11:04:35 -- setup/driver.sh@68 -- # setup reset 00:13:27.500 11:04:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:27.500 11:04:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:28.067 11:04:36 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:13:28.067 11:04:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:28.067 11:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:28.067 11:04:36 -- common/autotest_common.sh@10 -- # set +x 00:13:28.325 ************************************ 00:13:28.325 START TEST guess_driver 00:13:28.325 ************************************ 00:13:28.325 11:04:36 -- common/autotest_common.sh@1111 -- # guess_driver 00:13:28.325 11:04:36 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:13:28.325 11:04:36 -- setup/driver.sh@47 -- # local fail=0 00:13:28.325 11:04:36 -- setup/driver.sh@49 -- # pick_driver 00:13:28.325 11:04:36 -- setup/driver.sh@36 -- # vfio 00:13:28.325 11:04:36 -- setup/driver.sh@21 -- # local iommu_grups 00:13:28.325 11:04:36 -- setup/driver.sh@22 -- # local unsafe_vfio 00:13:28.325 11:04:36 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:13:28.325 11:04:36 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:13:28.325 11:04:36 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:13:28.325 11:04:36 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:13:28.325 11:04:36 -- setup/driver.sh@32 -- # return 1 00:13:28.325 11:04:36 -- setup/driver.sh@38 -- # uio 00:13:28.325 11:04:36 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:13:28.325 11:04:36 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:13:28.325 11:04:36 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:13:28.325 11:04:36 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:13:28.325 11:04:36 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:13:28.325 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:13:28.325 11:04:36 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:13:28.325 Looking for driver=uio_pci_generic 00:13:28.325 11:04:36 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:13:28.325 11:04:36 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:13:28.325 11:04:36 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:13:28.325 11:04:36 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:28.325 11:04:36 -- setup/driver.sh@45 -- # setup output config 00:13:28.325 11:04:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:28.325 11:04:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:28.891 11:04:36 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:13:28.891 11:04:36 -- setup/driver.sh@58 -- # continue 00:13:28.891 11:04:36 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:28.891 11:04:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:28.891 11:04:37 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:13:28.891 11:04:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:29.149 11:04:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:13:29.149 11:04:37 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:13:29.149 11:04:37 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:13:29.149 11:04:37 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:13:29.149 11:04:37 -- setup/driver.sh@65 -- # setup reset 00:13:29.149 11:04:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:29.149 11:04:37 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:29.716 00:13:29.716 real 0m1.471s 00:13:29.716 user 0m0.554s 00:13:29.716 sys 0m0.922s 00:13:29.716 11:04:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:29.716 11:04:37 -- common/autotest_common.sh@10 -- # set +x 00:13:29.716 ************************************ 00:13:29.716 END TEST guess_driver 00:13:29.716 ************************************ 00:13:29.716 00:13:29.716 real 0m2.249s 00:13:29.716 user 0m0.805s 00:13:29.716 sys 0m1.492s 00:13:29.716 11:04:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:29.716 11:04:37 -- common/autotest_common.sh@10 -- # set +x 00:13:29.716 ************************************ 00:13:29.716 END TEST driver 00:13:29.716 ************************************ 00:13:29.716 11:04:37 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:13:29.716 11:04:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:29.716 11:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:29.716 11:04:37 -- common/autotest_common.sh@10 -- # set +x 00:13:29.716 ************************************ 00:13:29.716 START TEST devices 00:13:29.716 ************************************ 00:13:29.716 11:04:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:13:29.974 * Looking for test storage... 00:13:29.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:13:29.974 11:04:38 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:13:29.974 11:04:38 -- setup/devices.sh@192 -- # setup reset 00:13:29.974 11:04:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:13:29.974 11:04:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:30.541 11:04:38 -- setup/devices.sh@194 -- # get_zoned_devs 00:13:30.541 11:04:38 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:30.541 11:04:38 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:30.541 11:04:38 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:30.541 11:04:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.541 11:04:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:30.541 11:04:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:30.541 11:04:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:30.541 11:04:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.541 11:04:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.541 11:04:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:13:30.541 11:04:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:13:30.541 11:04:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:13:30.541 11:04:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.541 11:04:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.541 11:04:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:13:30.541 11:04:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:13:30.541 11:04:38 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:13:30.541 11:04:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.541 11:04:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.541 11:04:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:30.541 11:04:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:30.541 11:04:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:30.541 11:04:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.541 11:04:38 -- setup/devices.sh@196 -- # blocks=() 00:13:30.541 11:04:38 -- setup/devices.sh@196 -- # declare -a blocks 00:13:30.541 11:04:38 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:13:30.541 11:04:38 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:13:30.541 11:04:38 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:13:30.541 11:04:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:13:30.541 11:04:38 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:13:30.541 11:04:38 -- setup/devices.sh@201 -- # ctrl=nvme0 00:13:30.541 11:04:38 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:13:30.541 11:04:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:13:30.541 11:04:38 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:13:30.541 11:04:38 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:13:30.541 11:04:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:13:30.800 No valid GPT data, bailing 00:13:30.800 11:04:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:30.800 11:04:38 -- scripts/common.sh@391 -- # pt= 00:13:30.800 11:04:38 -- scripts/common.sh@392 -- # return 1 00:13:30.800 11:04:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:13:30.800 11:04:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:30.800 11:04:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:30.800 11:04:38 -- setup/common.sh@80 -- # echo 4294967296 00:13:30.800 11:04:38 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:13:30.800 11:04:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:13:30.800 11:04:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:13:30.800 11:04:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:13:30.800 11:04:38 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:13:30.800 11:04:38 -- setup/devices.sh@201 -- # ctrl=nvme0 00:13:30.800 11:04:38 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:13:30.800 11:04:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:13:30.800 11:04:38 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:13:30.800 11:04:38 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:13:30.800 11:04:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:13:30.800 No valid GPT data, bailing 00:13:30.800 11:04:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:13:30.800 11:04:38 -- scripts/common.sh@391 -- # pt= 00:13:30.800 11:04:38 -- scripts/common.sh@392 -- # return 1 00:13:30.800 11:04:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:13:30.800 11:04:38 -- setup/common.sh@76 -- # local dev=nvme0n2 00:13:30.800 11:04:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:13:30.800 11:04:38 -- setup/common.sh@80 -- # echo 4294967296 00:13:30.800 11:04:38 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:13:30.800 11:04:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:13:30.800 11:04:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:13:30.800 11:04:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:13:30.800 11:04:38 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:13:30.800 11:04:38 -- setup/devices.sh@201 -- # ctrl=nvme0 00:13:30.800 11:04:38 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:13:30.800 11:04:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:13:30.800 11:04:38 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:13:30.800 11:04:38 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:13:30.800 11:04:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:13:30.800 No valid GPT data, bailing 00:13:30.800 11:04:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:13:30.800 11:04:38 -- scripts/common.sh@391 -- # pt= 00:13:30.800 11:04:38 -- scripts/common.sh@392 -- # return 1 00:13:30.800 11:04:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:13:30.800 11:04:38 -- setup/common.sh@76 -- # local dev=nvme0n3 00:13:30.800 11:04:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:13:30.800 11:04:38 -- setup/common.sh@80 -- # echo 4294967296 00:13:30.800 11:04:38 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:13:30.800 11:04:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:13:30.800 11:04:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:13:30.800 11:04:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:13:30.800 11:04:38 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:13:30.800 11:04:38 -- setup/devices.sh@201 -- # ctrl=nvme1 00:13:30.800 11:04:38 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:13:30.800 11:04:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:13:30.800 11:04:38 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:13:30.800 11:04:38 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:13:30.800 11:04:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:13:31.058 No valid GPT data, bailing 00:13:31.058 11:04:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:13:31.058 11:04:39 -- scripts/common.sh@391 -- # pt= 00:13:31.058 11:04:39 -- scripts/common.sh@392 -- # return 1 00:13:31.058 11:04:39 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:13:31.058 11:04:39 -- setup/common.sh@76 -- # local dev=nvme1n1 00:13:31.058 11:04:39 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:13:31.058 11:04:39 -- setup/common.sh@80 -- # echo 5368709120 00:13:31.058 11:04:39 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:13:31.058 11:04:39 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:13:31.058 11:04:39 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:13:31.058 11:04:39 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:13:31.058 11:04:39 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:13:31.058 11:04:39 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:13:31.058 11:04:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:31.058 11:04:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:31.058 11:04:39 -- common/autotest_common.sh@10 -- # set +x 00:13:31.058 
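The device scan that just ran reduces to three checks per NVMe namespace: skip zoned devices, treat the disk as free only when it carries no partition-table signature (that is what each "No valid GPT data, bailing" line reports), and keep it only if it clears the 3 GiB minimum, remembering which PCI controller backs it. A condensed sketch of that selection loop follows; the PCI lookup via the device/device symlink and the sector-based size computation are assumptions standing in for the repo's spdk-gpt.py and sec_size_to_bytes helpers.

#!/usr/bin/env bash
# Sketch: collect NVMe namespaces that are non-zoned, unpartitioned and at
# least 3 GiB, mapping each one to the PCI address of its controller.
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))
declare -a blocks
declare -A blocks_to_pci

for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ $dev == *c* ]] && continue        # skip controller-scoped names like nvme0c0n1

    # Zoned namespaces are excluded, mirroring the is_block_zoned checks above.
    if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
        continue
    fi

    # One common way to find the owning PCI device (assumption, not the SPDK helper).
    pci=$(basename "$(readlink -f "$block/device/device")")

    # Any partition-table signature means "in use"; an empty PTTYPE is the
    # "No valid GPT data, bailing" case in the log.
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null || true)
    [[ -n $pt ]] && continue

    size=$(( $(<"$block/size") * 512 ))  # the size file counts 512-byte sectors
    (( size >= min_disk_size )) || continue

    blocks+=("$dev")
    blocks_to_pci[$dev]=$pci
done

for dev in "${blocks[@]}"; do
    printf 'candidate: %s (pci %s)\n' "$dev" "${blocks_to_pci[$dev]}"
done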
************************************ 00:13:31.058 START TEST nvme_mount 00:13:31.058 ************************************ 00:13:31.058 11:04:39 -- common/autotest_common.sh@1111 -- # nvme_mount 00:13:31.058 11:04:39 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:13:31.058 11:04:39 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:13:31.058 11:04:39 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:31.058 11:04:39 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:31.058 11:04:39 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:13:31.058 11:04:39 -- setup/common.sh@39 -- # local disk=nvme0n1 00:13:31.058 11:04:39 -- setup/common.sh@40 -- # local part_no=1 00:13:31.058 11:04:39 -- setup/common.sh@41 -- # local size=1073741824 00:13:31.058 11:04:39 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:13:31.058 11:04:39 -- setup/common.sh@44 -- # parts=() 00:13:31.058 11:04:39 -- setup/common.sh@44 -- # local parts 00:13:31.058 11:04:39 -- setup/common.sh@46 -- # (( part = 1 )) 00:13:31.058 11:04:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:31.058 11:04:39 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:31.058 11:04:39 -- setup/common.sh@46 -- # (( part++ )) 00:13:31.058 11:04:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:31.058 11:04:39 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:13:31.058 11:04:39 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:13:31.058 11:04:39 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:13:31.993 Creating new GPT entries in memory. 00:13:31.993 GPT data structures destroyed! You may now partition the disk using fdisk or 00:13:31.993 other utilities. 00:13:31.993 11:04:40 -- setup/common.sh@57 -- # (( part = 1 )) 00:13:31.993 11:04:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:31.993 11:04:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:13:31.993 11:04:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:13:31.993 11:04:40 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:13:33.405 Creating new GPT entries in memory. 00:13:33.405 The operation has completed successfully. 
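With the disk zapped and partition 1 created, the next traced steps format the new partition, mount it, and drop a marker file for the verify stage. A self-contained sketch of the same sequence follows; it substitutes udevadm settle for the repo's sync_dev_uevents.sh helper, and the device, mount point and partition range are illustrative.

#!/usr/bin/env bash
# Sketch: wipe a scratch NVMe disk, carve one small partition, format it ext4,
# mount it and leave a test file behind. Destructive: scratch disks only.
set -euo pipefail

disk=/dev/nvme0n1
part=${disk}p1
mnt=/mnt/nvme_mount          # the test uses .../test/setup/nvme_mount instead

sgdisk "$disk" --zap-all                # destroy existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191      # partition 1: sectors 2048-264191 (128 MiB)
udevadm settle                          # stand-in for scripts/sync_dev_uevents.sh

mkdir -p "$mnt"
mkfs.ext4 -qF "$part"                   # -q quiet, -F force over any old signature
mount "$part" "$mnt"
touch "$mnt/test_nvme"                  # marker file the verify step checks for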
00:13:33.405 11:04:41 -- setup/common.sh@57 -- # (( part++ )) 00:13:33.405 11:04:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:33.405 11:04:41 -- setup/common.sh@62 -- # wait 58392 00:13:33.405 11:04:41 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:33.405 11:04:41 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:13:33.405 11:04:41 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:33.405 11:04:41 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:13:33.405 11:04:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:13:33.405 11:04:41 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:33.405 11:04:41 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:33.405 11:04:41 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:13:33.405 11:04:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:13:33.405 11:04:41 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:33.405 11:04:41 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:33.405 11:04:41 -- setup/devices.sh@53 -- # local found=0 00:13:33.405 11:04:41 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:33.405 11:04:41 -- setup/devices.sh@56 -- # : 00:13:33.405 11:04:41 -- setup/devices.sh@59 -- # local pci status 00:13:33.405 11:04:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:33.405 11:04:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:13:33.405 11:04:41 -- setup/devices.sh@47 -- # setup output config 00:13:33.405 11:04:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:33.405 11:04:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:33.405 11:04:41 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:33.405 11:04:41 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:13:33.405 11:04:41 -- setup/devices.sh@63 -- # found=1 00:13:33.405 11:04:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:33.405 11:04:41 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:33.405 11:04:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:33.663 11:04:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:33.663 11:04:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:33.663 11:04:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:33.663 11:04:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:33.663 11:04:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:33.663 11:04:41 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:13:33.663 11:04:41 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:33.663 11:04:41 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:33.663 11:04:41 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:33.663 11:04:41 -- setup/devices.sh@110 -- # cleanup_nvme 00:13:33.663 11:04:41 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:33.663 11:04:41 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:33.663 11:04:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:33.663 11:04:41 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:13:33.663 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:13:33.663 11:04:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:33.663 11:04:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:33.921 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:33.921 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:33.921 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:33.921 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:33.921 11:04:42 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:13:33.921 11:04:42 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:13:33.921 11:04:42 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:33.921 11:04:42 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:13:33.921 11:04:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:13:33.921 11:04:42 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:33.921 11:04:42 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:33.921 11:04:42 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:13:33.921 11:04:42 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:13:33.921 11:04:42 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:33.921 11:04:42 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:33.921 11:04:42 -- setup/devices.sh@53 -- # local found=0 00:13:33.921 11:04:42 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:33.921 11:04:42 -- setup/devices.sh@56 -- # : 00:13:33.921 11:04:42 -- setup/devices.sh@59 -- # local pci status 00:13:33.921 11:04:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:13:33.921 11:04:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:33.921 11:04:42 -- setup/devices.sh@47 -- # setup output config 00:13:33.921 11:04:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:33.921 11:04:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:34.179 11:04:42 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:34.180 11:04:42 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:13:34.180 11:04:42 -- setup/devices.sh@63 -- # found=1 00:13:34.180 11:04:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:34.180 11:04:42 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:34.180 
11:04:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:34.438 11:04:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:34.438 11:04:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:34.438 11:04:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:34.438 11:04:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:34.438 11:04:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:34.438 11:04:42 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:13:34.438 11:04:42 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:34.438 11:04:42 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:13:34.438 11:04:42 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:13:34.438 11:04:42 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:34.438 11:04:42 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:13:34.438 11:04:42 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:13:34.438 11:04:42 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:13:34.438 11:04:42 -- setup/devices.sh@50 -- # local mount_point= 00:13:34.438 11:04:42 -- setup/devices.sh@51 -- # local test_file= 00:13:34.438 11:04:42 -- setup/devices.sh@53 -- # local found=0 00:13:34.438 11:04:42 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:13:34.438 11:04:42 -- setup/devices.sh@59 -- # local pci status 00:13:34.438 11:04:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:34.438 11:04:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:13:34.438 11:04:42 -- setup/devices.sh@47 -- # setup output config 00:13:34.438 11:04:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:34.438 11:04:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:34.696 11:04:42 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:34.696 11:04:42 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:13:34.696 11:04:42 -- setup/devices.sh@63 -- # found=1 00:13:34.696 11:04:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:34.696 11:04:42 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:34.696 11:04:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:34.955 11:04:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:34.955 11:04:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:34.955 11:04:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:34.955 11:04:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:34.955 11:04:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:34.955 11:04:43 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:13:34.955 11:04:43 -- setup/devices.sh@68 -- # return 0 00:13:34.955 11:04:43 -- setup/devices.sh@128 -- # cleanup_nvme 00:13:34.955 11:04:43 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:34.955 11:04:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:34.955 11:04:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:34.955 11:04:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:34.955 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:13:34.955 00:13:34.955 real 0m3.998s 00:13:34.955 user 0m0.676s 00:13:34.955 sys 0m1.014s 00:13:34.955 11:04:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:34.955 ************************************ 00:13:34.955 END TEST nvme_mount 00:13:34.955 11:04:43 -- common/autotest_common.sh@10 -- # set +x 00:13:34.955 ************************************ 00:13:35.214 11:04:43 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:13:35.214 11:04:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:35.214 11:04:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:35.214 11:04:43 -- common/autotest_common.sh@10 -- # set +x 00:13:35.214 ************************************ 00:13:35.214 START TEST dm_mount 00:13:35.214 ************************************ 00:13:35.214 11:04:43 -- common/autotest_common.sh@1111 -- # dm_mount 00:13:35.214 11:04:43 -- setup/devices.sh@144 -- # pv=nvme0n1 00:13:35.214 11:04:43 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:13:35.214 11:04:43 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:13:35.214 11:04:43 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:13:35.214 11:04:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:13:35.214 11:04:43 -- setup/common.sh@40 -- # local part_no=2 00:13:35.214 11:04:43 -- setup/common.sh@41 -- # local size=1073741824 00:13:35.214 11:04:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:13:35.214 11:04:43 -- setup/common.sh@44 -- # parts=() 00:13:35.214 11:04:43 -- setup/common.sh@44 -- # local parts 00:13:35.214 11:04:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:13:35.214 11:04:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:35.214 11:04:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:35.214 11:04:43 -- setup/common.sh@46 -- # (( part++ )) 00:13:35.214 11:04:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:35.214 11:04:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:13:35.214 11:04:43 -- setup/common.sh@46 -- # (( part++ )) 00:13:35.214 11:04:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:13:35.214 11:04:43 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:13:35.214 11:04:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:13:35.214 11:04:43 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:13:36.148 Creating new GPT entries in memory. 00:13:36.148 GPT data structures destroyed! You may now partition the disk using fdisk or 00:13:36.148 other utilities. 00:13:36.148 11:04:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:13:36.148 11:04:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:36.148 11:04:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:13:36.148 11:04:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:13:36.148 11:04:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:13:37.519 Creating new GPT entries in memory. 00:13:37.519 The operation has completed successfully. 00:13:37.519 11:04:45 -- setup/common.sh@57 -- # (( part++ )) 00:13:37.519 11:04:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:37.519 11:04:45 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:13:37.519 11:04:45 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:13:37.519 11:04:45 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:13:38.450 The operation has completed successfully. 00:13:38.450 11:04:46 -- setup/common.sh@57 -- # (( part++ )) 00:13:38.450 11:04:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:13:38.450 11:04:46 -- setup/common.sh@62 -- # wait 58829 00:13:38.450 11:04:46 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:13:38.450 11:04:46 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:38.450 11:04:46 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:13:38.450 11:04:46 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:13:38.450 11:04:46 -- setup/devices.sh@160 -- # for t in {1..5} 00:13:38.450 11:04:46 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:13:38.450 11:04:46 -- setup/devices.sh@161 -- # break 00:13:38.450 11:04:46 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:13:38.450 11:04:46 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:13:38.450 11:04:46 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:13:38.450 11:04:46 -- setup/devices.sh@166 -- # dm=dm-0 00:13:38.450 11:04:46 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:13:38.450 11:04:46 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:13:38.450 11:04:46 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:38.450 11:04:46 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:13:38.450 11:04:46 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:38.450 11:04:46 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:13:38.450 11:04:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:13:38.450 11:04:46 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:38.450 11:04:46 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:13:38.450 11:04:46 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:13:38.450 11:04:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:13:38.450 11:04:46 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:38.450 11:04:46 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:13:38.450 11:04:46 -- setup/devices.sh@53 -- # local found=0 00:13:38.450 11:04:46 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:13:38.450 11:04:46 -- setup/devices.sh@56 -- # : 00:13:38.450 11:04:46 -- setup/devices.sh@59 -- # local pci status 00:13:38.450 11:04:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:13:38.450 11:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.450 11:04:46 -- setup/devices.sh@47 -- # setup output config 00:13:38.450 11:04:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:38.450 11:04:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:38.450 11:04:46 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.450 11:04:46 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:13:38.450 11:04:46 -- setup/devices.sh@63 -- # found=1 00:13:38.450 11:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.450 11:04:46 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.450 11:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.708 11:04:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.708 11:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.708 11:04:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.708 11:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.965 11:04:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:38.965 11:04:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:13:38.965 11:04:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:38.965 11:04:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:13:38.965 11:04:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:13:38.965 11:04:46 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:38.965 11:04:46 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:13:38.965 11:04:46 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:13:38.966 11:04:46 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:13:38.966 11:04:46 -- setup/devices.sh@50 -- # local mount_point= 00:13:38.966 11:04:46 -- setup/devices.sh@51 -- # local test_file= 00:13:38.966 11:04:46 -- setup/devices.sh@53 -- # local found=0 00:13:38.966 11:04:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:13:38.966 11:04:46 -- setup/devices.sh@59 -- # local pci status 00:13:38.966 11:04:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.966 11:04:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:13:38.966 11:04:46 -- setup/devices.sh@47 -- # setup output config 00:13:38.966 11:04:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:13:38.966 11:04:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:13:38.966 11:04:47 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.966 11:04:47 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:13:38.966 11:04:47 -- setup/devices.sh@63 -- # found=1 00:13:38.966 11:04:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:38.966 11:04:47 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.966 11:04:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:39.223 11:04:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:39.223 11:04:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:39.223 11:04:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:39.223 11:04:47 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:13:39.223 11:04:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:13:39.223 11:04:47 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:13:39.223 11:04:47 -- setup/devices.sh@68 -- # return 0 00:13:39.223 11:04:47 -- setup/devices.sh@187 -- # cleanup_dm 00:13:39.223 11:04:47 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:39.223 11:04:47 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:13:39.223 11:04:47 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:13:39.480 11:04:47 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:39.480 11:04:47 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:13:39.480 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:13:39.480 11:04:47 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:13:39.480 11:04:47 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:13:39.480 00:13:39.480 real 0m4.233s 00:13:39.480 user 0m0.472s 00:13:39.480 sys 0m0.700s 00:13:39.480 11:04:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.480 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:13:39.480 ************************************ 00:13:39.480 END TEST dm_mount 00:13:39.480 ************************************ 00:13:39.480 11:04:47 -- setup/devices.sh@1 -- # cleanup 00:13:39.480 11:04:47 -- setup/devices.sh@11 -- # cleanup_nvme 00:13:39.480 11:04:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:13:39.480 11:04:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:39.480 11:04:47 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:13:39.480 11:04:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:13:39.480 11:04:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:13:39.738 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:39.738 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:39.738 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:39.738 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:39.738 11:04:47 -- setup/devices.sh@12 -- # cleanup_dm 00:13:39.738 11:04:47 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:13:39.738 11:04:47 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:13:39.738 11:04:47 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:13:39.738 11:04:47 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:13:39.738 11:04:47 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:13:39.738 11:04:47 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:13:39.738 00:13:39.738 real 0m9.877s 00:13:39.738 user 0m1.818s 00:13:39.738 sys 0m2.376s 00:13:39.738 ************************************ 00:13:39.738 END TEST devices 00:13:39.738 ************************************ 00:13:39.738 11:04:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.738 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:13:39.738 00:13:39.738 real 0m22.520s 00:13:39.738 user 0m7.243s 00:13:39.738 sys 0m9.564s 00:13:39.738 11:04:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.738 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:13:39.738 ************************************ 00:13:39.738 END TEST setup.sh 00:13:39.738 ************************************ 00:13:39.738 11:04:47 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:13:40.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:40.304 Hugepages 00:13:40.304 node hugesize free / total 00:13:40.304 node0 1048576kB 0 / 0 00:13:40.304 node0 2048kB 2048 / 2048 00:13:40.304 00:13:40.304 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:40.562 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:13:40.562 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:13:40.562 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:13:40.562 11:04:48 -- spdk/autotest.sh@130 -- # uname -s 00:13:40.562 11:04:48 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:13:40.562 11:04:48 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:13:40.562 11:04:48 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:41.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:41.385 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.385 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.385 11:04:49 -- common/autotest_common.sh@1518 -- # sleep 1 00:13:42.357 11:04:50 -- common/autotest_common.sh@1519 -- # bdfs=() 00:13:42.357 11:04:50 -- common/autotest_common.sh@1519 -- # local bdfs 00:13:42.357 11:04:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:13:42.357 11:04:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:13:42.357 11:04:50 -- common/autotest_common.sh@1499 -- # bdfs=() 00:13:42.357 11:04:50 -- common/autotest_common.sh@1499 -- # local bdfs 00:13:42.357 11:04:50 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:42.357 11:04:50 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:42.357 11:04:50 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:13:42.615 11:04:50 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:13:42.615 11:04:50 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:13:42.615 11:04:50 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:42.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:42.873 Waiting for block devices as requested 00:13:42.873 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:42.873 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:43.131 11:04:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:13:43.131 11:04:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:13:43.131 11:04:51 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:13:43.131 11:04:51 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:13:43.131 11:04:51 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:13:43.131 11:04:51 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:13:43.131 11:04:51 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:13:43.131 11:04:51 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:13:43.131 11:04:51 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:13:43.131 11:04:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:13:43.131 11:04:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:13:43.131 11:04:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:13:43.131 11:04:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:13:43.131 11:04:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:13:43.131 11:04:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:13:43.131 11:04:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:13:43.131 11:04:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:13:43.131 11:04:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:13:43.131 11:04:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:13:43.131 11:04:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:13:43.131 11:04:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:13:43.131 11:04:51 -- common/autotest_common.sh@1543 -- # continue 00:13:43.131 11:04:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:13:43.131 11:04:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:13:43.131 11:04:51 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:13:43.131 11:04:51 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:13:43.132 11:04:51 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:13:43.132 11:04:51 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:13:43.132 11:04:51 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:13:43.132 11:04:51 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:13:43.132 11:04:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:13:43.132 11:04:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:13:43.132 11:04:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:13:43.132 11:04:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:13:43.132 11:04:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:13:43.132 11:04:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:13:43.132 11:04:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:13:43.132 11:04:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:13:43.132 11:04:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:13:43.132 11:04:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:13:43.132 11:04:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:13:43.132 11:04:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:13:43.132 11:04:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:13:43.132 11:04:51 -- common/autotest_common.sh@1543 -- # continue 00:13:43.132 11:04:51 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:13:43.132 11:04:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:43.132 11:04:51 -- common/autotest_common.sh@10 -- # set +x 00:13:43.132 11:04:51 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:13:43.132 11:04:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:43.132 11:04:51 -- common/autotest_common.sh@10 -- # set +x 00:13:43.132 11:04:51 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:43.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:13:43.957 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.957 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:43.957 11:04:52 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:13:43.957 11:04:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:43.957 11:04:52 -- common/autotest_common.sh@10 -- # set +x 00:13:43.957 11:04:52 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:13:43.957 11:04:52 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:13:43.957 11:04:52 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:13:43.957 11:04:52 -- common/autotest_common.sh@1563 -- # bdfs=() 00:13:43.957 11:04:52 -- common/autotest_common.sh@1563 -- # local bdfs 00:13:43.957 11:04:52 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:13:43.957 11:04:52 -- common/autotest_common.sh@1499 -- # bdfs=() 00:13:43.957 11:04:52 -- common/autotest_common.sh@1499 -- # local bdfs 00:13:43.957 11:04:52 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:43.957 11:04:52 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:43.957 11:04:52 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:13:43.957 11:04:52 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:13:43.957 11:04:52 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:13:43.957 11:04:52 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:13:43.957 11:04:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:13:43.957 11:04:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:13:43.957 11:04:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:43.957 11:04:52 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:13:43.957 11:04:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:13:43.957 11:04:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:13:43.957 11:04:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:43.957 11:04:52 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:13:43.957 11:04:52 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:13:43.957 11:04:52 -- common/autotest_common.sh@1579 -- # return 0 00:13:43.957 11:04:52 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:13:43.957 11:04:52 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:13:43.957 11:04:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:43.957 11:04:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:43.957 11:04:52 -- spdk/autotest.sh@162 -- # timing_enter lib 00:13:43.957 11:04:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:43.957 11:04:52 -- common/autotest_common.sh@10 -- # set +x 00:13:43.957 11:04:52 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:13:43.957 11:04:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:43.957 11:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.957 11:04:52 -- common/autotest_common.sh@10 -- # set +x 00:13:44.216 ************************************ 00:13:44.216 START TEST env 00:13:44.216 ************************************ 00:13:44.216 11:04:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:13:44.216 * Looking for test storage... 
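The opal_revert_cleanup step above keeps only controllers whose PCI device ID is 0x0a54 (typically an Intel datacenter NVMe SSD that supports OPAL revert). A minimal sketch of that filter, with illustrative variable names rather than the test's own helpers, looks like this:

    rootdir=/home/vagrant/spdk_repo/spdk
    # BDFs of all NVMe controllers, as gen_nvme.sh reports them
    bdfs=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    for bdf in $bdfs; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        # only 0x0a54 devices would be passed on for an OPAL revert
        [[ $device == 0x0a54 ]] && echo "$bdf"
    done

In this run both controllers report 0x0010 (QEMU's emulated NVMe), so the list stays empty and the cleanup returns without doing anything.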
00:13:44.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:13:44.216 11:04:52 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:13:44.216 11:04:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:44.216 11:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.216 11:04:52 -- common/autotest_common.sh@10 -- # set +x 00:13:44.216 ************************************ 00:13:44.216 START TEST env_memory 00:13:44.216 ************************************ 00:13:44.216 11:04:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:13:44.216 00:13:44.216 00:13:44.216 CUnit - A unit testing framework for C - Version 2.1-3 00:13:44.216 http://cunit.sourceforge.net/ 00:13:44.216 00:13:44.216 00:13:44.216 Suite: memory 00:13:44.474 Test: alloc and free memory map ...[2024-04-18 11:04:52.474751] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:13:44.474 passed 00:13:44.474 Test: mem map translation ...[2024-04-18 11:04:52.536206] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:13:44.474 [2024-04-18 11:04:52.536300] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:13:44.474 [2024-04-18 11:04:52.536399] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:13:44.474 [2024-04-18 11:04:52.536433] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:13:44.474 passed 00:13:44.474 Test: mem map registration ...[2024-04-18 11:04:52.634867] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:13:44.474 [2024-04-18 11:04:52.634957] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:13:44.474 passed 00:13:44.732 Test: mem map adjacent registrations ...passed 00:13:44.732 00:13:44.732 Run Summary: Type Total Ran Passed Failed Inactive 00:13:44.732 suites 1 1 n/a 0 0 00:13:44.732 tests 4 4 4 0 0 00:13:44.732 asserts 152 152 152 0 n/a 00:13:44.732 00:13:44.732 Elapsed time = 0.348 seconds 00:13:44.732 00:13:44.732 real 0m0.387s 00:13:44.732 user 0m0.362s 00:13:44.732 sys 0m0.023s 00:13:44.732 11:04:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:44.732 11:04:52 -- common/autotest_common.sh@10 -- # set +x 00:13:44.732 ************************************ 00:13:44.732 END TEST env_memory 00:13:44.732 ************************************ 00:13:44.732 11:04:52 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:13:44.732 11:04:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:44.732 11:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.732 11:04:52 -- common/autotest_common.sh@10 -- # set +x 00:13:44.732 ************************************ 00:13:44.732 START TEST env_vtophys 00:13:44.732 ************************************ 00:13:44.732 11:04:52 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:13:44.732 EAL: lib.eal log level changed from notice to debug 00:13:44.732 EAL: Detected lcore 0 as core 0 on socket 0 00:13:44.732 EAL: Detected lcore 1 as core 0 on socket 0 00:13:44.732 EAL: Detected lcore 2 as core 0 on socket 0 00:13:44.732 EAL: Detected lcore 3 as core 0 on socket 0 00:13:44.732 EAL: Detected lcore 4 as core 0 on socket 0 00:13:44.732 EAL: Detected lcore 5 as core 0 on socket 0 00:13:44.732 EAL: Detected lcore 6 as core 0 on socket 0 00:13:44.732 EAL: Detected lcore 7 as core 0 on socket 0 00:13:44.732 EAL: Detected lcore 8 as core 0 on socket 0 00:13:44.732 EAL: Detected lcore 9 as core 0 on socket 0 00:13:44.991 EAL: Maximum logical cores by configuration: 128 00:13:44.991 EAL: Detected CPU lcores: 10 00:13:44.991 EAL: Detected NUMA nodes: 1 00:13:44.991 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:13:44.991 EAL: Detected shared linkage of DPDK 00:13:44.991 EAL: No shared files mode enabled, IPC will be disabled 00:13:44.991 EAL: Selected IOVA mode 'PA' 00:13:44.991 EAL: Probing VFIO support... 00:13:44.991 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:13:44.991 EAL: VFIO modules not loaded, skipping VFIO support... 00:13:44.991 EAL: Ask a virtual area of 0x2e000 bytes 00:13:44.991 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:13:44.991 EAL: Setting up physically contiguous memory... 00:13:44.991 EAL: Setting maximum number of open files to 524288 00:13:44.991 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:13:44.991 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:13:44.991 EAL: Ask a virtual area of 0x61000 bytes 00:13:44.991 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:13:44.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:44.991 EAL: Ask a virtual area of 0x400000000 bytes 00:13:44.991 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:13:44.991 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:13:44.991 EAL: Ask a virtual area of 0x61000 bytes 00:13:44.991 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:13:44.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:44.991 EAL: Ask a virtual area of 0x400000000 bytes 00:13:44.991 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:13:44.991 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:13:44.991 EAL: Ask a virtual area of 0x61000 bytes 00:13:44.991 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:13:44.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:44.991 EAL: Ask a virtual area of 0x400000000 bytes 00:13:44.991 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:13:44.991 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:13:44.991 EAL: Ask a virtual area of 0x61000 bytes 00:13:44.991 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:13:44.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:44.991 EAL: Ask a virtual area of 0x400000000 bytes 00:13:44.991 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:13:44.991 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:13:44.991 EAL: Hugepages will be freed exactly as allocated. 
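Each of the four memseg lists reserved above holds 8192 segments of the 2 MiB hugepage size, which matches the 0x400000000-byte virtual areas the EAL reports; a quick check of the arithmetic:

    # 8192 segments x 2097152-byte (2 MiB) hugepages per memseg list
    echo $(( 8192 * 2097152 ))              # 17179869184 bytes
    printf '0x%x\n' $(( 8192 * 2097152 ))   # 0x400000000, i.e. 16 GiB per list

Four such lists means roughly 64 GiB of virtual address space is reserved up front, backed on demand by the 2048 hugepages shown in the setup.sh status output earlier.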
00:13:44.991 EAL: No shared files mode enabled, IPC is disabled 00:13:44.991 EAL: No shared files mode enabled, IPC is disabled 00:13:44.991 EAL: TSC frequency is ~2200000 KHz 00:13:44.991 EAL: Main lcore 0 is ready (tid=7f05706cfa40;cpuset=[0]) 00:13:44.991 EAL: Trying to obtain current memory policy. 00:13:44.991 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:44.991 EAL: Restoring previous memory policy: 0 00:13:44.991 EAL: request: mp_malloc_sync 00:13:44.991 EAL: No shared files mode enabled, IPC is disabled 00:13:44.991 EAL: Heap on socket 0 was expanded by 2MB 00:13:44.991 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:13:44.991 EAL: No PCI address specified using 'addr=' in: bus=pci 00:13:44.991 EAL: Mem event callback 'spdk:(nil)' registered 00:13:44.991 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:13:44.991 00:13:44.991 00:13:44.991 CUnit - A unit testing framework for C - Version 2.1-3 00:13:44.991 http://cunit.sourceforge.net/ 00:13:44.991 00:13:44.991 00:13:44.991 Suite: components_suite 00:13:45.557 Test: vtophys_malloc_test ...passed 00:13:45.557 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:13:45.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:45.557 EAL: Restoring previous memory policy: 4 00:13:45.557 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.557 EAL: request: mp_malloc_sync 00:13:45.557 EAL: No shared files mode enabled, IPC is disabled 00:13:45.557 EAL: Heap on socket 0 was expanded by 4MB 00:13:45.557 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.557 EAL: request: mp_malloc_sync 00:13:45.557 EAL: No shared files mode enabled, IPC is disabled 00:13:45.557 EAL: Heap on socket 0 was shrunk by 4MB 00:13:45.557 EAL: Trying to obtain current memory policy. 00:13:45.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:45.557 EAL: Restoring previous memory policy: 4 00:13:45.557 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.557 EAL: request: mp_malloc_sync 00:13:45.557 EAL: No shared files mode enabled, IPC is disabled 00:13:45.557 EAL: Heap on socket 0 was expanded by 6MB 00:13:45.557 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.557 EAL: request: mp_malloc_sync 00:13:45.557 EAL: No shared files mode enabled, IPC is disabled 00:13:45.557 EAL: Heap on socket 0 was shrunk by 6MB 00:13:45.557 EAL: Trying to obtain current memory policy. 00:13:45.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:45.557 EAL: Restoring previous memory policy: 4 00:13:45.558 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.558 EAL: request: mp_malloc_sync 00:13:45.558 EAL: No shared files mode enabled, IPC is disabled 00:13:45.558 EAL: Heap on socket 0 was expanded by 10MB 00:13:45.558 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.558 EAL: request: mp_malloc_sync 00:13:45.558 EAL: No shared files mode enabled, IPC is disabled 00:13:45.558 EAL: Heap on socket 0 was shrunk by 10MB 00:13:45.558 EAL: Trying to obtain current memory policy. 
00:13:45.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:45.558 EAL: Restoring previous memory policy: 4 00:13:45.558 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.558 EAL: request: mp_malloc_sync 00:13:45.558 EAL: No shared files mode enabled, IPC is disabled 00:13:45.558 EAL: Heap on socket 0 was expanded by 18MB 00:13:45.558 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.558 EAL: request: mp_malloc_sync 00:13:45.558 EAL: No shared files mode enabled, IPC is disabled 00:13:45.558 EAL: Heap on socket 0 was shrunk by 18MB 00:13:45.558 EAL: Trying to obtain current memory policy. 00:13:45.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:45.558 EAL: Restoring previous memory policy: 4 00:13:45.558 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.558 EAL: request: mp_malloc_sync 00:13:45.558 EAL: No shared files mode enabled, IPC is disabled 00:13:45.558 EAL: Heap on socket 0 was expanded by 34MB 00:13:45.558 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.558 EAL: request: mp_malloc_sync 00:13:45.558 EAL: No shared files mode enabled, IPC is disabled 00:13:45.558 EAL: Heap on socket 0 was shrunk by 34MB 00:13:45.816 EAL: Trying to obtain current memory policy. 00:13:45.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:45.816 EAL: Restoring previous memory policy: 4 00:13:45.816 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.816 EAL: request: mp_malloc_sync 00:13:45.816 EAL: No shared files mode enabled, IPC is disabled 00:13:45.816 EAL: Heap on socket 0 was expanded by 66MB 00:13:45.816 EAL: Calling mem event callback 'spdk:(nil)' 00:13:45.816 EAL: request: mp_malloc_sync 00:13:45.816 EAL: No shared files mode enabled, IPC is disabled 00:13:45.816 EAL: Heap on socket 0 was shrunk by 66MB 00:13:45.816 EAL: Trying to obtain current memory policy. 00:13:45.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:46.075 EAL: Restoring previous memory policy: 4 00:13:46.075 EAL: Calling mem event callback 'spdk:(nil)' 00:13:46.075 EAL: request: mp_malloc_sync 00:13:46.075 EAL: No shared files mode enabled, IPC is disabled 00:13:46.075 EAL: Heap on socket 0 was expanded by 130MB 00:13:46.075 EAL: Calling mem event callback 'spdk:(nil)' 00:13:46.075 EAL: request: mp_malloc_sync 00:13:46.075 EAL: No shared files mode enabled, IPC is disabled 00:13:46.075 EAL: Heap on socket 0 was shrunk by 130MB 00:13:46.333 EAL: Trying to obtain current memory policy. 00:13:46.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:46.333 EAL: Restoring previous memory policy: 4 00:13:46.333 EAL: Calling mem event callback 'spdk:(nil)' 00:13:46.333 EAL: request: mp_malloc_sync 00:13:46.333 EAL: No shared files mode enabled, IPC is disabled 00:13:46.333 EAL: Heap on socket 0 was expanded by 258MB 00:13:46.899 EAL: Calling mem event callback 'spdk:(nil)' 00:13:46.899 EAL: request: mp_malloc_sync 00:13:46.899 EAL: No shared files mode enabled, IPC is disabled 00:13:46.899 EAL: Heap on socket 0 was shrunk by 258MB 00:13:47.157 EAL: Trying to obtain current memory policy. 
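The heap expansions vtophys_spdk_malloc_test has logged so far (4, 6, 10, 18, 34, 66, 130 and 258 MB, continuing to 514 and 1026 MB below) follow 2^k + 2 MB, which suggests the test doubles its allocation each iteration on top of the 2 MB it already holds; this is an inference from the log rather than from the test source:

    # reproduce the sequence of expansion sizes seen in the log
    for k in $(seq 1 10); do
        printf '%dMB ' $(( (1 << k) + 2 ))
    done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB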
00:13:47.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:47.415 EAL: Restoring previous memory policy: 4 00:13:47.415 EAL: Calling mem event callback 'spdk:(nil)' 00:13:47.415 EAL: request: mp_malloc_sync 00:13:47.415 EAL: No shared files mode enabled, IPC is disabled 00:13:47.415 EAL: Heap on socket 0 was expanded by 514MB 00:13:48.348 EAL: Calling mem event callback 'spdk:(nil)' 00:13:48.348 EAL: request: mp_malloc_sync 00:13:48.348 EAL: No shared files mode enabled, IPC is disabled 00:13:48.348 EAL: Heap on socket 0 was shrunk by 514MB 00:13:49.282 EAL: Trying to obtain current memory policy. 00:13:49.282 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:49.282 EAL: Restoring previous memory policy: 4 00:13:49.282 EAL: Calling mem event callback 'spdk:(nil)' 00:13:49.282 EAL: request: mp_malloc_sync 00:13:49.282 EAL: No shared files mode enabled, IPC is disabled 00:13:49.282 EAL: Heap on socket 0 was expanded by 1026MB 00:13:51.248 EAL: Calling mem event callback 'spdk:(nil)' 00:13:51.248 EAL: request: mp_malloc_sync 00:13:51.248 EAL: No shared files mode enabled, IPC is disabled 00:13:51.248 EAL: Heap on socket 0 was shrunk by 1026MB 00:13:52.620 passed 00:13:52.620 00:13:52.620 Run Summary: Type Total Ran Passed Failed Inactive 00:13:52.620 suites 1 1 n/a 0 0 00:13:52.620 tests 2 2 2 0 0 00:13:52.620 asserts 5229 5229 5229 0 n/a 00:13:52.620 00:13:52.620 Elapsed time = 7.551 seconds 00:13:52.620 EAL: Calling mem event callback 'spdk:(nil)' 00:13:52.620 EAL: request: mp_malloc_sync 00:13:52.620 EAL: No shared files mode enabled, IPC is disabled 00:13:52.620 EAL: Heap on socket 0 was shrunk by 2MB 00:13:52.620 EAL: No shared files mode enabled, IPC is disabled 00:13:52.620 EAL: No shared files mode enabled, IPC is disabled 00:13:52.620 EAL: No shared files mode enabled, IPC is disabled 00:13:52.620 00:13:52.620 real 0m7.863s 00:13:52.620 user 0m6.717s 00:13:52.620 sys 0m0.982s 00:13:52.620 11:05:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:52.620 11:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:52.620 ************************************ 00:13:52.620 END TEST env_vtophys 00:13:52.620 ************************************ 00:13:52.620 11:05:00 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:13:52.620 11:05:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:52.620 11:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:52.620 11:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:52.879 ************************************ 00:13:52.879 START TEST env_pci 00:13:52.879 ************************************ 00:13:52.879 11:05:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:13:52.879 00:13:52.879 00:13:52.879 CUnit - A unit testing framework for C - Version 2.1-3 00:13:52.879 http://cunit.sourceforge.net/ 00:13:52.879 00:13:52.879 00:13:52.879 Suite: pci 00:13:52.879 Test: pci_hook ...[2024-04-18 11:05:00.911056] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60117 has claimed it 00:13:52.879 passed 00:13:52.879 00:13:52.879 EAL: Cannot find device (10000:00:01.0) 00:13:52.879 EAL: Failed to attach device on primary process 00:13:52.879 Run Summary: Type Total Ran Passed Failed Inactive 00:13:52.879 suites 1 1 n/a 0 0 00:13:52.879 tests 1 1 1 0 0 00:13:52.879 asserts 25 25 25 0 n/a 00:13:52.879 00:13:52.879 Elapsed 
time = 0.010 seconds 00:13:52.879 00:13:52.879 real 0m0.077s 00:13:52.879 user 0m0.030s 00:13:52.879 sys 0m0.046s 00:13:52.879 11:05:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:52.879 ************************************ 00:13:52.879 END TEST env_pci 00:13:52.879 ************************************ 00:13:52.879 11:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:52.879 11:05:00 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:13:52.879 11:05:00 -- env/env.sh@15 -- # uname 00:13:52.879 11:05:00 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:13:52.879 11:05:00 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:13:52.879 11:05:00 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:52.879 11:05:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:52.879 11:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:52.879 11:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:52.879 ************************************ 00:13:52.879 START TEST env_dpdk_post_init 00:13:52.879 ************************************ 00:13:52.879 11:05:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:53.138 EAL: Detected CPU lcores: 10 00:13:53.138 EAL: Detected NUMA nodes: 1 00:13:53.138 EAL: Detected shared linkage of DPDK 00:13:53.138 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:53.138 EAL: Selected IOVA mode 'PA' 00:13:53.138 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:53.138 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:13:53.138 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:13:53.138 Starting DPDK initialization... 00:13:53.138 Starting SPDK post initialization... 00:13:53.138 SPDK NVMe probe 00:13:53.138 Attaching to 0000:00:10.0 00:13:53.138 Attaching to 0000:00:11.0 00:13:53.138 Attached to 0000:00:10.0 00:13:53.138 Attached to 0000:00:11.0 00:13:53.138 Cleaning up... 
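env_dpdk_post_init is the only env unit test in this run that is passed explicit EAL arguments: env.sh pins it to core 0 and, on Linux, adds a fixed base virtual address. Re-running it by hand with the same arguments (root is typically required for hugepage and PCI access) would look like:

    rootdir=/home/vagrant/spdk_repo/spdk
    sudo "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" \
        -c 0x1 --base-virtaddr=0x200000000000
    # expected output mirrors the run above: DPDK init, SPDK post init,
    # NVMe probe, attach to 0000:00:10.0 and 0000:00:11.0, then cleanup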
00:13:53.138 00:13:53.138 real 0m0.281s 00:13:53.138 user 0m0.077s 00:13:53.138 sys 0m0.104s 00:13:53.396 11:05:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:53.396 11:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:53.396 ************************************ 00:13:53.396 END TEST env_dpdk_post_init 00:13:53.396 ************************************ 00:13:53.396 11:05:01 -- env/env.sh@26 -- # uname 00:13:53.396 11:05:01 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:13:53.396 11:05:01 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:13:53.396 11:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:53.396 11:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.396 11:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:53.396 ************************************ 00:13:53.396 START TEST env_mem_callbacks 00:13:53.396 ************************************ 00:13:53.396 11:05:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:13:53.396 EAL: Detected CPU lcores: 10 00:13:53.396 EAL: Detected NUMA nodes: 1 00:13:53.396 EAL: Detected shared linkage of DPDK 00:13:53.396 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:53.396 EAL: Selected IOVA mode 'PA' 00:13:53.654 00:13:53.654 00:13:53.654 CUnit - A unit testing framework for C - Version 2.1-3 00:13:53.654 http://cunit.sourceforge.net/ 00:13:53.654 00:13:53.654 00:13:53.654 Suite: memory 00:13:53.654 Test: test ... 00:13:53.655 register 0x200000200000 2097152 00:13:53.655 malloc 3145728 00:13:53.655 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:53.655 register 0x200000400000 4194304 00:13:53.655 buf 0x2000004fffc0 len 3145728 PASSED 00:13:53.655 malloc 64 00:13:53.655 buf 0x2000004ffec0 len 64 PASSED 00:13:53.655 malloc 4194304 00:13:53.655 register 0x200000800000 6291456 00:13:53.655 buf 0x2000009fffc0 len 4194304 PASSED 00:13:53.655 free 0x2000004fffc0 3145728 00:13:53.655 free 0x2000004ffec0 64 00:13:53.655 unregister 0x200000400000 4194304 PASSED 00:13:53.655 free 0x2000009fffc0 4194304 00:13:53.655 unregister 0x200000800000 6291456 PASSED 00:13:53.655 malloc 8388608 00:13:53.655 register 0x200000400000 10485760 00:13:53.655 buf 0x2000005fffc0 len 8388608 PASSED 00:13:53.655 free 0x2000005fffc0 8388608 00:13:53.655 unregister 0x200000400000 10485760 PASSED 00:13:53.655 passed 00:13:53.655 00:13:53.655 Run Summary: Type Total Ran Passed Failed Inactive 00:13:53.655 suites 1 1 n/a 0 0 00:13:53.655 tests 1 1 1 0 0 00:13:53.655 asserts 15 15 15 0 n/a 00:13:53.655 00:13:53.655 Elapsed time = 0.069 seconds 00:13:53.655 00:13:53.655 real 0m0.270s 00:13:53.655 user 0m0.099s 00:13:53.655 sys 0m0.069s 00:13:53.655 11:05:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:53.655 ************************************ 00:13:53.655 END TEST env_mem_callbacks 00:13:53.655 ************************************ 00:13:53.655 11:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:53.655 00:13:53.655 real 0m9.547s 00:13:53.655 user 0m7.506s 00:13:53.655 sys 0m1.596s 00:13:53.655 11:05:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:53.655 ************************************ 00:13:53.655 END TEST env 00:13:53.655 ************************************ 00:13:53.655 11:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:53.655 11:05:01 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
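The rpc suite that starts next drives a live spdk_tgt (launched with -e bdev and listening on the default /var/tmp/spdk.sock) through rpc_cmd, which wraps the repo's JSON-RPC client. A minimal manual equivalent, assuming the standard scripts/rpc.py client, is:

    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/build/bin/spdk_tgt" -e bdev &
    spdk_pid=$!
    # wait for the JSON-RPC socket before issuing commands
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
    # the same calls the integrity test below issues through rpc_cmd
    "$rootdir/scripts/rpc.py" bdev_malloc_create 8 512
    "$rootdir/scripts/rpc.py" bdev_get_bdevs
    kill "$spdk_pid"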
00:13:53.655 11:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:53.655 11:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.655 11:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:53.912 ************************************ 00:13:53.912 START TEST rpc 00:13:53.912 ************************************ 00:13:53.912 11:05:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:13:53.912 * Looking for test storage... 00:13:53.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:13:53.912 11:05:01 -- rpc/rpc.sh@65 -- # spdk_pid=60249 00:13:53.912 11:05:01 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:13:53.912 11:05:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:53.912 11:05:01 -- rpc/rpc.sh@67 -- # waitforlisten 60249 00:13:53.912 11:05:01 -- common/autotest_common.sh@817 -- # '[' -z 60249 ']' 00:13:53.912 11:05:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.912 11:05:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:53.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.912 11:05:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.912 11:05:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:53.912 11:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:53.912 [2024-04-18 11:05:02.097458] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:53.912 [2024-04-18 11:05:02.097627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60249 ] 00:13:54.170 [2024-04-18 11:05:02.267180] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.428 [2024-04-18 11:05:02.551030] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:13:54.428 [2024-04-18 11:05:02.551144] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60249' to capture a snapshot of events at runtime. 00:13:54.428 [2024-04-18 11:05:02.551166] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.428 [2024-04-18 11:05:02.551185] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.428 [2024-04-18 11:05:02.551200] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60249 for offline analysis/debug. 
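Because the target was started with -e bdev, the bdev tracepoint group (mask 0x8) records into the shared-memory file named in the notice above. Either option the target suggests works while pid 60249 is alive; the spdk_trace binary is assumed to live in the default build/bin output directory:

    # live snapshot of the trace ring for this spdk_tgt process
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 60249
    # or keep the raw shared-memory file for offline decoding after the target exits
    cp /dev/shm/spdk_tgt_trace.pid60249 /tmp/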
00:13:54.428 [2024-04-18 11:05:02.551242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.363 11:05:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:55.363 11:05:03 -- common/autotest_common.sh@850 -- # return 0 00:13:55.363 11:05:03 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:13:55.363 11:05:03 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:13:55.363 11:05:03 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:13:55.363 11:05:03 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:13:55.363 11:05:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:55.363 11:05:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:55.363 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.363 ************************************ 00:13:55.363 START TEST rpc_integrity 00:13:55.363 ************************************ 00:13:55.363 11:05:03 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:13:55.363 11:05:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:55.363 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.363 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.363 11:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.363 11:05:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:55.363 11:05:03 -- rpc/rpc.sh@13 -- # jq length 00:13:55.363 11:05:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:55.363 11:05:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:55.363 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.363 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.363 11:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.363 11:05:03 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:13:55.363 11:05:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:55.363 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.363 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.363 11:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.363 11:05:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:55.363 { 00:13:55.363 "aliases": [ 00:13:55.363 "8f50c6e8-823e-41a4-854e-c8e49fc76dc2" 00:13:55.363 ], 00:13:55.363 "assigned_rate_limits": { 00:13:55.363 "r_mbytes_per_sec": 0, 00:13:55.363 "rw_ios_per_sec": 0, 00:13:55.363 "rw_mbytes_per_sec": 0, 00:13:55.363 "w_mbytes_per_sec": 0 00:13:55.363 }, 00:13:55.363 "block_size": 512, 00:13:55.363 "claimed": false, 00:13:55.363 "driver_specific": {}, 00:13:55.363 "memory_domains": [ 00:13:55.363 { 00:13:55.363 "dma_device_id": "system", 00:13:55.363 "dma_device_type": 1 00:13:55.363 }, 00:13:55.363 { 00:13:55.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.363 "dma_device_type": 2 00:13:55.363 } 00:13:55.363 ], 00:13:55.363 "name": "Malloc0", 00:13:55.363 "num_blocks": 16384, 00:13:55.363 "product_name": "Malloc disk", 00:13:55.363 "supported_io_types": { 00:13:55.363 "abort": true, 00:13:55.363 "compare": false, 00:13:55.363 "compare_and_write": false, 00:13:55.363 "flush": true, 00:13:55.363 "nvme_admin": false, 00:13:55.363 "nvme_io": false, 00:13:55.363 "read": true, 00:13:55.363 "reset": true, 
00:13:55.363 "unmap": true, 00:13:55.363 "write": true, 00:13:55.363 "write_zeroes": true 00:13:55.363 }, 00:13:55.363 "uuid": "8f50c6e8-823e-41a4-854e-c8e49fc76dc2", 00:13:55.363 "zoned": false 00:13:55.363 } 00:13:55.363 ]' 00:13:55.363 11:05:03 -- rpc/rpc.sh@17 -- # jq length 00:13:55.621 11:05:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:55.621 11:05:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:13:55.621 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.621 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.621 [2024-04-18 11:05:03.624784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:13:55.621 [2024-04-18 11:05:03.624858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.621 [2024-04-18 11:05:03.624894] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:55.621 [2024-04-18 11:05:03.624913] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.621 [2024-04-18 11:05:03.627678] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.621 [2024-04-18 11:05:03.627729] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:55.621 Passthru0 00:13:55.621 11:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.621 11:05:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:55.621 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.621 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.621 11:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.621 11:05:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:55.621 { 00:13:55.621 "aliases": [ 00:13:55.621 "8f50c6e8-823e-41a4-854e-c8e49fc76dc2" 00:13:55.621 ], 00:13:55.621 "assigned_rate_limits": { 00:13:55.621 "r_mbytes_per_sec": 0, 00:13:55.621 "rw_ios_per_sec": 0, 00:13:55.621 "rw_mbytes_per_sec": 0, 00:13:55.621 "w_mbytes_per_sec": 0 00:13:55.621 }, 00:13:55.621 "block_size": 512, 00:13:55.621 "claim_type": "exclusive_write", 00:13:55.621 "claimed": true, 00:13:55.621 "driver_specific": {}, 00:13:55.621 "memory_domains": [ 00:13:55.621 { 00:13:55.621 "dma_device_id": "system", 00:13:55.621 "dma_device_type": 1 00:13:55.621 }, 00:13:55.621 { 00:13:55.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.621 "dma_device_type": 2 00:13:55.621 } 00:13:55.621 ], 00:13:55.621 "name": "Malloc0", 00:13:55.621 "num_blocks": 16384, 00:13:55.621 "product_name": "Malloc disk", 00:13:55.621 "supported_io_types": { 00:13:55.621 "abort": true, 00:13:55.621 "compare": false, 00:13:55.621 "compare_and_write": false, 00:13:55.621 "flush": true, 00:13:55.621 "nvme_admin": false, 00:13:55.621 "nvme_io": false, 00:13:55.621 "read": true, 00:13:55.621 "reset": true, 00:13:55.621 "unmap": true, 00:13:55.621 "write": true, 00:13:55.621 "write_zeroes": true 00:13:55.621 }, 00:13:55.621 "uuid": "8f50c6e8-823e-41a4-854e-c8e49fc76dc2", 00:13:55.621 "zoned": false 00:13:55.621 }, 00:13:55.621 { 00:13:55.621 "aliases": [ 00:13:55.621 "807569b0-32f7-5a8a-9384-f35cb399afcc" 00:13:55.621 ], 00:13:55.621 "assigned_rate_limits": { 00:13:55.621 "r_mbytes_per_sec": 0, 00:13:55.621 "rw_ios_per_sec": 0, 00:13:55.621 "rw_mbytes_per_sec": 0, 00:13:55.621 "w_mbytes_per_sec": 0 00:13:55.621 }, 00:13:55.621 "block_size": 512, 00:13:55.621 "claimed": false, 00:13:55.621 "driver_specific": { 00:13:55.621 "passthru": { 00:13:55.621 "base_bdev_name": "Malloc0", 00:13:55.621 
"name": "Passthru0" 00:13:55.621 } 00:13:55.621 }, 00:13:55.621 "memory_domains": [ 00:13:55.621 { 00:13:55.621 "dma_device_id": "system", 00:13:55.621 "dma_device_type": 1 00:13:55.621 }, 00:13:55.621 { 00:13:55.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.621 "dma_device_type": 2 00:13:55.621 } 00:13:55.621 ], 00:13:55.621 "name": "Passthru0", 00:13:55.621 "num_blocks": 16384, 00:13:55.621 "product_name": "passthru", 00:13:55.621 "supported_io_types": { 00:13:55.621 "abort": true, 00:13:55.621 "compare": false, 00:13:55.621 "compare_and_write": false, 00:13:55.621 "flush": true, 00:13:55.621 "nvme_admin": false, 00:13:55.621 "nvme_io": false, 00:13:55.621 "read": true, 00:13:55.621 "reset": true, 00:13:55.621 "unmap": true, 00:13:55.621 "write": true, 00:13:55.621 "write_zeroes": true 00:13:55.621 }, 00:13:55.621 "uuid": "807569b0-32f7-5a8a-9384-f35cb399afcc", 00:13:55.621 "zoned": false 00:13:55.621 } 00:13:55.621 ]' 00:13:55.621 11:05:03 -- rpc/rpc.sh@21 -- # jq length 00:13:55.621 11:05:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:55.621 11:05:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:55.621 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.621 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.621 11:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.621 11:05:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:55.621 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.621 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.621 11:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.621 11:05:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:55.621 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.621 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.621 11:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.621 11:05:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:55.621 11:05:03 -- rpc/rpc.sh@26 -- # jq length 00:13:55.621 11:05:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:55.621 00:13:55.621 real 0m0.331s 00:13:55.621 user 0m0.200s 00:13:55.621 sys 0m0.034s 00:13:55.621 ************************************ 00:13:55.621 END TEST rpc_integrity 00:13:55.621 ************************************ 00:13:55.621 11:05:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:55.621 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.621 11:05:03 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:13:55.621 11:05:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:55.621 11:05:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:55.621 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.879 ************************************ 00:13:55.879 START TEST rpc_plugins 00:13:55.879 ************************************ 00:13:55.879 11:05:03 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:13:55.879 11:05:03 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:13:55.879 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.879 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.879 11:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.879 11:05:03 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:13:55.879 11:05:03 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:13:55.879 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.879 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.879 
11:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.879 11:05:03 -- rpc/rpc.sh@31 -- # bdevs='[ 00:13:55.879 { 00:13:55.879 "aliases": [ 00:13:55.879 "c46ef255-fa67-44db-87eb-12fcf625c9d7" 00:13:55.879 ], 00:13:55.879 "assigned_rate_limits": { 00:13:55.879 "r_mbytes_per_sec": 0, 00:13:55.879 "rw_ios_per_sec": 0, 00:13:55.879 "rw_mbytes_per_sec": 0, 00:13:55.879 "w_mbytes_per_sec": 0 00:13:55.879 }, 00:13:55.879 "block_size": 4096, 00:13:55.879 "claimed": false, 00:13:55.879 "driver_specific": {}, 00:13:55.879 "memory_domains": [ 00:13:55.879 { 00:13:55.879 "dma_device_id": "system", 00:13:55.879 "dma_device_type": 1 00:13:55.879 }, 00:13:55.879 { 00:13:55.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.879 "dma_device_type": 2 00:13:55.879 } 00:13:55.879 ], 00:13:55.879 "name": "Malloc1", 00:13:55.879 "num_blocks": 256, 00:13:55.879 "product_name": "Malloc disk", 00:13:55.879 "supported_io_types": { 00:13:55.879 "abort": true, 00:13:55.879 "compare": false, 00:13:55.879 "compare_and_write": false, 00:13:55.879 "flush": true, 00:13:55.879 "nvme_admin": false, 00:13:55.879 "nvme_io": false, 00:13:55.879 "read": true, 00:13:55.879 "reset": true, 00:13:55.879 "unmap": true, 00:13:55.879 "write": true, 00:13:55.879 "write_zeroes": true 00:13:55.879 }, 00:13:55.879 "uuid": "c46ef255-fa67-44db-87eb-12fcf625c9d7", 00:13:55.879 "zoned": false 00:13:55.879 } 00:13:55.879 ]' 00:13:55.879 11:05:03 -- rpc/rpc.sh@32 -- # jq length 00:13:55.879 11:05:03 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:13:55.879 11:05:03 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:13:55.879 11:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.879 11:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.879 11:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.879 11:05:04 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:13:55.879 11:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.879 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.879 11:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.879 11:05:04 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:13:55.879 11:05:04 -- rpc/rpc.sh@36 -- # jq length 00:13:55.879 11:05:04 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:13:55.879 00:13:55.879 real 0m0.161s 00:13:55.879 user 0m0.104s 00:13:55.879 sys 0m0.021s 00:13:55.879 ************************************ 00:13:55.879 END TEST rpc_plugins 00:13:55.879 ************************************ 00:13:55.879 11:05:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:55.879 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.137 11:05:04 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:13:56.137 11:05:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:56.137 11:05:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:56.137 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.137 ************************************ 00:13:56.137 START TEST rpc_trace_cmd_test 00:13:56.137 ************************************ 00:13:56.137 11:05:04 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:13:56.137 11:05:04 -- rpc/rpc.sh@40 -- # local info 00:13:56.137 11:05:04 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:13:56.137 11:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.137 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.137 11:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.137 11:05:04 -- 
rpc/rpc.sh@42 -- # info='{ 00:13:56.137 "bdev": { 00:13:56.137 "mask": "0x8", 00:13:56.137 "tpoint_mask": "0xffffffffffffffff" 00:13:56.137 }, 00:13:56.137 "bdev_nvme": { 00:13:56.137 "mask": "0x4000", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "blobfs": { 00:13:56.137 "mask": "0x80", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "dsa": { 00:13:56.137 "mask": "0x200", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "ftl": { 00:13:56.137 "mask": "0x40", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "iaa": { 00:13:56.137 "mask": "0x1000", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "iscsi_conn": { 00:13:56.137 "mask": "0x2", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "nvme_pcie": { 00:13:56.137 "mask": "0x800", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "nvme_tcp": { 00:13:56.137 "mask": "0x2000", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "nvmf_rdma": { 00:13:56.137 "mask": "0x10", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "nvmf_tcp": { 00:13:56.137 "mask": "0x20", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "scsi": { 00:13:56.137 "mask": "0x4", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "sock": { 00:13:56.137 "mask": "0x8000", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "thread": { 00:13:56.137 "mask": "0x400", 00:13:56.137 "tpoint_mask": "0x0" 00:13:56.137 }, 00:13:56.137 "tpoint_group_mask": "0x8", 00:13:56.137 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60249" 00:13:56.137 }' 00:13:56.137 11:05:04 -- rpc/rpc.sh@43 -- # jq length 00:13:56.137 11:05:04 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:13:56.137 11:05:04 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:13:56.137 11:05:04 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:13:56.137 11:05:04 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:13:56.137 11:05:04 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:13:56.137 11:05:04 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:13:56.396 11:05:04 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:13:56.396 11:05:04 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:13:56.396 11:05:04 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:13:56.396 00:13:56.396 real 0m0.253s 00:13:56.396 user 0m0.223s 00:13:56.396 sys 0m0.023s 00:13:56.396 ************************************ 00:13:56.396 END TEST rpc_trace_cmd_test 00:13:56.396 ************************************ 00:13:56.396 11:05:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:56.396 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.396 11:05:04 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:13:56.396 11:05:04 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:13:56.396 11:05:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:56.396 11:05:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:56.396 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.396 ************************************ 00:13:56.396 START TEST go_rpc 00:13:56.396 ************************************ 00:13:56.396 11:05:04 -- common/autotest_common.sh@1111 -- # go_rpc 00:13:56.396 11:05:04 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:13:56.396 11:05:04 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:13:56.396 11:05:04 -- rpc/rpc.sh@52 -- # jq length 00:13:56.396 11:05:04 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:13:56.396 11:05:04 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 
512 00:13:56.396 11:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.396 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.653 11:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.653 11:05:04 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:13:56.654 11:05:04 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:13:56.654 11:05:04 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["46ce120b-4b3a-4c22-8423-cd4d6386338b"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"46ce120b-4b3a-4c22-8423-cd4d6386338b","zoned":false}]' 00:13:56.654 11:05:04 -- rpc/rpc.sh@57 -- # jq length 00:13:56.654 11:05:04 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:13:56.654 11:05:04 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:56.654 11:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.654 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.654 11:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.654 11:05:04 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:13:56.654 11:05:04 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:13:56.654 11:05:04 -- rpc/rpc.sh@61 -- # jq length 00:13:56.654 11:05:04 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:13:56.654 00:13:56.654 real 0m0.252s 00:13:56.654 user 0m0.159s 00:13:56.654 sys 0m0.033s 00:13:56.654 11:05:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:56.654 ************************************ 00:13:56.654 END TEST go_rpc 00:13:56.654 ************************************ 00:13:56.654 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.654 11:05:04 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:13:56.654 11:05:04 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:13:56.654 11:05:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:56.654 11:05:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:56.654 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.912 ************************************ 00:13:56.912 START TEST rpc_daemon_integrity 00:13:56.912 ************************************ 00:13:56.912 11:05:04 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:13:56.912 11:05:04 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:56.912 11:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.912 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.912 11:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.912 11:05:04 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:56.912 11:05:04 -- rpc/rpc.sh@13 -- # jq length 00:13:56.912 11:05:04 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:56.912 11:05:04 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:56.912 11:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.912 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:56.912 11:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.912 11:05:04 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:13:56.912 11:05:05 -- rpc/rpc.sh@16 
-- # rpc_cmd bdev_get_bdevs 00:13:56.912 11:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.912 11:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.912 11:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.912 11:05:05 -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:56.912 { 00:13:56.912 "aliases": [ 00:13:56.912 "c0cf46d5-846b-449c-acd3-69479dab1dbc" 00:13:56.912 ], 00:13:56.912 "assigned_rate_limits": { 00:13:56.912 "r_mbytes_per_sec": 0, 00:13:56.912 "rw_ios_per_sec": 0, 00:13:56.912 "rw_mbytes_per_sec": 0, 00:13:56.912 "w_mbytes_per_sec": 0 00:13:56.912 }, 00:13:56.912 "block_size": 512, 00:13:56.912 "claimed": false, 00:13:56.912 "driver_specific": {}, 00:13:56.912 "memory_domains": [ 00:13:56.912 { 00:13:56.912 "dma_device_id": "system", 00:13:56.912 "dma_device_type": 1 00:13:56.912 }, 00:13:56.912 { 00:13:56.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.912 "dma_device_type": 2 00:13:56.912 } 00:13:56.912 ], 00:13:56.912 "name": "Malloc3", 00:13:56.912 "num_blocks": 16384, 00:13:56.912 "product_name": "Malloc disk", 00:13:56.912 "supported_io_types": { 00:13:56.912 "abort": true, 00:13:56.912 "compare": false, 00:13:56.912 "compare_and_write": false, 00:13:56.912 "flush": true, 00:13:56.912 "nvme_admin": false, 00:13:56.912 "nvme_io": false, 00:13:56.912 "read": true, 00:13:56.912 "reset": true, 00:13:56.912 "unmap": true, 00:13:56.912 "write": true, 00:13:56.912 "write_zeroes": true 00:13:56.912 }, 00:13:56.912 "uuid": "c0cf46d5-846b-449c-acd3-69479dab1dbc", 00:13:56.912 "zoned": false 00:13:56.912 } 00:13:56.912 ]' 00:13:56.912 11:05:05 -- rpc/rpc.sh@17 -- # jq length 00:13:56.912 11:05:05 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:56.912 11:05:05 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:13:56.912 11:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.912 11:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.912 [2024-04-18 11:05:05.074442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:56.912 [2024-04-18 11:05:05.074522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.912 [2024-04-18 11:05:05.074562] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:13:56.912 [2024-04-18 11:05:05.074580] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.912 [2024-04-18 11:05:05.077411] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.912 [2024-04-18 11:05:05.077460] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:56.912 Passthru0 00:13:56.912 11:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.912 11:05:05 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:56.912 11:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.912 11:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.912 11:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.912 11:05:05 -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:56.912 { 00:13:56.912 "aliases": [ 00:13:56.912 "c0cf46d5-846b-449c-acd3-69479dab1dbc" 00:13:56.912 ], 00:13:56.912 "assigned_rate_limits": { 00:13:56.912 "r_mbytes_per_sec": 0, 00:13:56.912 "rw_ios_per_sec": 0, 00:13:56.912 "rw_mbytes_per_sec": 0, 00:13:56.912 "w_mbytes_per_sec": 0 00:13:56.912 }, 00:13:56.912 "block_size": 512, 00:13:56.912 "claim_type": "exclusive_write", 00:13:56.912 "claimed": true, 00:13:56.912 "driver_specific": {}, 
00:13:56.912 "memory_domains": [ 00:13:56.912 { 00:13:56.912 "dma_device_id": "system", 00:13:56.912 "dma_device_type": 1 00:13:56.912 }, 00:13:56.912 { 00:13:56.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.912 "dma_device_type": 2 00:13:56.912 } 00:13:56.912 ], 00:13:56.912 "name": "Malloc3", 00:13:56.912 "num_blocks": 16384, 00:13:56.912 "product_name": "Malloc disk", 00:13:56.912 "supported_io_types": { 00:13:56.912 "abort": true, 00:13:56.912 "compare": false, 00:13:56.912 "compare_and_write": false, 00:13:56.912 "flush": true, 00:13:56.912 "nvme_admin": false, 00:13:56.912 "nvme_io": false, 00:13:56.912 "read": true, 00:13:56.912 "reset": true, 00:13:56.912 "unmap": true, 00:13:56.912 "write": true, 00:13:56.912 "write_zeroes": true 00:13:56.912 }, 00:13:56.912 "uuid": "c0cf46d5-846b-449c-acd3-69479dab1dbc", 00:13:56.912 "zoned": false 00:13:56.912 }, 00:13:56.912 { 00:13:56.912 "aliases": [ 00:13:56.912 "58864b79-4c3b-569b-8305-823471ceff71" 00:13:56.912 ], 00:13:56.912 "assigned_rate_limits": { 00:13:56.912 "r_mbytes_per_sec": 0, 00:13:56.912 "rw_ios_per_sec": 0, 00:13:56.912 "rw_mbytes_per_sec": 0, 00:13:56.912 "w_mbytes_per_sec": 0 00:13:56.912 }, 00:13:56.912 "block_size": 512, 00:13:56.912 "claimed": false, 00:13:56.912 "driver_specific": { 00:13:56.912 "passthru": { 00:13:56.912 "base_bdev_name": "Malloc3", 00:13:56.912 "name": "Passthru0" 00:13:56.912 } 00:13:56.912 }, 00:13:56.912 "memory_domains": [ 00:13:56.912 { 00:13:56.912 "dma_device_id": "system", 00:13:56.912 "dma_device_type": 1 00:13:56.912 }, 00:13:56.912 { 00:13:56.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.912 "dma_device_type": 2 00:13:56.912 } 00:13:56.912 ], 00:13:56.912 "name": "Passthru0", 00:13:56.912 "num_blocks": 16384, 00:13:56.912 "product_name": "passthru", 00:13:56.912 "supported_io_types": { 00:13:56.912 "abort": true, 00:13:56.912 "compare": false, 00:13:56.912 "compare_and_write": false, 00:13:56.912 "flush": true, 00:13:56.912 "nvme_admin": false, 00:13:56.912 "nvme_io": false, 00:13:56.912 "read": true, 00:13:56.912 "reset": true, 00:13:56.912 "unmap": true, 00:13:56.912 "write": true, 00:13:56.912 "write_zeroes": true 00:13:56.912 }, 00:13:56.912 "uuid": "58864b79-4c3b-569b-8305-823471ceff71", 00:13:56.912 "zoned": false 00:13:56.912 } 00:13:56.912 ]' 00:13:56.912 11:05:05 -- rpc/rpc.sh@21 -- # jq length 00:13:57.170 11:05:05 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:57.170 11:05:05 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:57.170 11:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.170 11:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:57.170 11:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.170 11:05:05 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:57.170 11:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.170 11:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:57.170 11:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.170 11:05:05 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:57.170 11:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.170 11:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:57.170 11:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.170 11:05:05 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:57.170 11:05:05 -- rpc/rpc.sh@26 -- # jq length 00:13:57.170 11:05:05 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:57.170 00:13:57.170 real 0m0.338s 00:13:57.170 user 0m0.212s 00:13:57.170 sys 0m0.032s 
00:13:57.170 ************************************ 00:13:57.170 END TEST rpc_daemon_integrity 00:13:57.170 ************************************ 00:13:57.170 11:05:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.170 11:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:57.170 11:05:05 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:57.170 11:05:05 -- rpc/rpc.sh@84 -- # killprocess 60249 00:13:57.170 11:05:05 -- common/autotest_common.sh@936 -- # '[' -z 60249 ']' 00:13:57.170 11:05:05 -- common/autotest_common.sh@940 -- # kill -0 60249 00:13:57.170 11:05:05 -- common/autotest_common.sh@941 -- # uname 00:13:57.170 11:05:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:57.170 11:05:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60249 00:13:57.170 11:05:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:57.170 killing process with pid 60249 00:13:57.170 11:05:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:57.170 11:05:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60249' 00:13:57.170 11:05:05 -- common/autotest_common.sh@955 -- # kill 60249 00:13:57.170 11:05:05 -- common/autotest_common.sh@960 -- # wait 60249 00:13:59.696 00:13:59.697 real 0m5.616s 00:13:59.697 user 0m6.606s 00:13:59.697 sys 0m0.998s 00:13:59.697 11:05:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:59.697 ************************************ 00:13:59.697 END TEST rpc 00:13:59.697 ************************************ 00:13:59.697 11:05:07 -- common/autotest_common.sh@10 -- # set +x 00:13:59.697 11:05:07 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:13:59.697 11:05:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:59.697 11:05:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.697 11:05:07 -- common/autotest_common.sh@10 -- # set +x 00:13:59.697 ************************************ 00:13:59.697 START TEST skip_rpc 00:13:59.697 ************************************ 00:13:59.697 11:05:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:13:59.697 * Looking for test storage... 00:13:59.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:13:59.697 11:05:07 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:59.697 11:05:07 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:59.697 11:05:07 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:13:59.697 11:05:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:59.697 11:05:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.697 11:05:07 -- common/autotest_common.sh@10 -- # set +x 00:13:59.697 ************************************ 00:13:59.697 START TEST skip_rpc 00:13:59.697 ************************************ 00:13:59.697 11:05:07 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:13:59.697 11:05:07 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60564 00:13:59.697 11:05:07 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:13:59.697 11:05:07 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:59.697 11:05:07 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:13:59.697 [2024-04-18 11:05:07.905462] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
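The rpc_integrity and rpc_daemon_integrity passes above boil down to a short bdev round trip over the RPC socket: create a malloc bdev, wrap it with a passthru bdev, check that bdev_get_bdevs reports both, then delete them in reverse order. A minimal sketch of that sequence with scripts/rpc.py against a running target on the default /var/tmp/spdk.sock (the bdev names and the 8 MiB / 512-byte geometry follow the run above):

  # create the base malloc bdev; the RPC prints the generated name (Malloc3 above)
  scripts/rpc.py bdev_malloc_create 8 512
  # layer a passthru bdev on top of the malloc bdev
  scripts/rpc.py bdev_passthru_create -b Malloc3 -p Passthru0
  # the tests assert that exactly two bdevs are reported at this point
  scripts/rpc.py bdev_get_bdevs | jq length
  # tear down: passthru first, then the claimed base bdev
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc3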
00:13:59.697 [2024-04-18 11:05:07.905657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60564 ] 00:13:59.956 [2024-04-18 11:05:08.077644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.213 [2024-04-18 11:05:08.336235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.476 11:05:12 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:14:05.476 11:05:12 -- common/autotest_common.sh@638 -- # local es=0 00:14:05.476 11:05:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:14:05.476 11:05:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:14:05.476 11:05:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:05.476 11:05:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:14:05.476 11:05:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:05.476 11:05:12 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:14:05.476 11:05:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.476 11:05:12 -- common/autotest_common.sh@10 -- # set +x 00:14:05.476 2024/04/18 11:05:12 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:14:05.476 11:05:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:05.476 11:05:12 -- common/autotest_common.sh@641 -- # es=1 00:14:05.476 11:05:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:05.476 11:05:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:05.476 11:05:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:05.476 11:05:12 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:14:05.476 11:05:12 -- rpc/skip_rpc.sh@23 -- # killprocess 60564 00:14:05.476 11:05:12 -- common/autotest_common.sh@936 -- # '[' -z 60564 ']' 00:14:05.476 11:05:12 -- common/autotest_common.sh@940 -- # kill -0 60564 00:14:05.476 11:05:12 -- common/autotest_common.sh@941 -- # uname 00:14:05.476 11:05:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:05.476 11:05:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60564 00:14:05.476 11:05:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:05.476 11:05:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:05.476 killing process with pid 60564 00:14:05.476 11:05:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60564' 00:14:05.476 11:05:12 -- common/autotest_common.sh@955 -- # kill 60564 00:14:05.476 11:05:12 -- common/autotest_common.sh@960 -- # wait 60564 00:14:06.852 00:14:06.852 real 0m7.265s 00:14:06.852 user 0m6.706s 00:14:06.852 sys 0m0.442s 00:14:06.852 ************************************ 00:14:06.852 END TEST skip_rpc 00:14:06.852 ************************************ 00:14:06.852 11:05:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:06.852 11:05:15 -- common/autotest_common.sh@10 -- # set +x 00:14:07.111 11:05:15 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:14:07.111 11:05:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:07.111 11:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:07.111 11:05:15 -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.111 ************************************ 00:14:07.111 START TEST skip_rpc_with_json 00:14:07.111 ************************************ 00:14:07.111 11:05:15 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:14:07.111 11:05:15 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:14:07.111 11:05:15 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60689 00:14:07.111 11:05:15 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:07.111 11:05:15 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:07.111 11:05:15 -- rpc/skip_rpc.sh@31 -- # waitforlisten 60689 00:14:07.111 11:05:15 -- common/autotest_common.sh@817 -- # '[' -z 60689 ']' 00:14:07.111 11:05:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.111 11:05:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:07.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.111 11:05:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.111 11:05:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:07.111 11:05:15 -- common/autotest_common.sh@10 -- # set +x 00:14:07.111 [2024-04-18 11:05:15.275418] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:07.111 [2024-04-18 11:05:15.275568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60689 ] 00:14:07.369 [2024-04-18 11:05:15.438801] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.627 [2024-04-18 11:05:15.679231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.562 11:05:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:08.562 11:05:16 -- common/autotest_common.sh@850 -- # return 0 00:14:08.562 11:05:16 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:14:08.562 11:05:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:08.562 11:05:16 -- common/autotest_common.sh@10 -- # set +x 00:14:08.562 [2024-04-18 11:05:16.490935] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:14:08.562 2024/04/18 11:05:16 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:14:08.562 request: 00:14:08.562 { 00:14:08.562 "method": "nvmf_get_transports", 00:14:08.562 "params": { 00:14:08.562 "trtype": "tcp" 00:14:08.562 } 00:14:08.562 } 00:14:08.562 Got JSON-RPC error response 00:14:08.562 GoRPCClient: error on JSON-RPC call 00:14:08.562 11:05:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:08.562 11:05:16 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:14:08.562 11:05:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:08.562 11:05:16 -- common/autotest_common.sh@10 -- # set +x 00:14:08.562 [2024-04-18 11:05:16.499021] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.562 11:05:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:08.563 11:05:16 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:14:08.563 11:05:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:08.563 11:05:16 -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.563 11:05:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:08.563 11:05:16 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:08.563 { 00:14:08.563 "subsystems": [ 00:14:08.563 { 00:14:08.563 "subsystem": "keyring", 00:14:08.563 "config": [] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "iobuf", 00:14:08.563 "config": [ 00:14:08.563 { 00:14:08.563 "method": "iobuf_set_options", 00:14:08.563 "params": { 00:14:08.563 "large_bufsize": 135168, 00:14:08.563 "large_pool_count": 1024, 00:14:08.563 "small_bufsize": 8192, 00:14:08.563 "small_pool_count": 8192 00:14:08.563 } 00:14:08.563 } 00:14:08.563 ] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "sock", 00:14:08.563 "config": [ 00:14:08.563 { 00:14:08.563 "method": "sock_impl_set_options", 00:14:08.563 "params": { 00:14:08.563 "enable_ktls": false, 00:14:08.563 "enable_placement_id": 0, 00:14:08.563 "enable_quickack": false, 00:14:08.563 "enable_recv_pipe": true, 00:14:08.563 "enable_zerocopy_send_client": false, 00:14:08.563 "enable_zerocopy_send_server": true, 00:14:08.563 "impl_name": "posix", 00:14:08.563 "recv_buf_size": 2097152, 00:14:08.563 "send_buf_size": 2097152, 00:14:08.563 "tls_version": 0, 00:14:08.563 "zerocopy_threshold": 0 00:14:08.563 } 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "method": "sock_impl_set_options", 00:14:08.563 "params": { 00:14:08.563 "enable_ktls": false, 00:14:08.563 "enable_placement_id": 0, 00:14:08.563 "enable_quickack": false, 00:14:08.563 "enable_recv_pipe": true, 00:14:08.563 "enable_zerocopy_send_client": false, 00:14:08.563 "enable_zerocopy_send_server": true, 00:14:08.563 "impl_name": "ssl", 00:14:08.563 "recv_buf_size": 4096, 00:14:08.563 "send_buf_size": 4096, 00:14:08.563 "tls_version": 0, 00:14:08.563 "zerocopy_threshold": 0 00:14:08.563 } 00:14:08.563 } 00:14:08.563 ] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "vmd", 00:14:08.563 "config": [] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "accel", 00:14:08.563 "config": [ 00:14:08.563 { 00:14:08.563 "method": "accel_set_options", 00:14:08.563 "params": { 00:14:08.563 "buf_count": 2048, 00:14:08.563 "large_cache_size": 16, 00:14:08.563 "sequence_count": 2048, 00:14:08.563 "small_cache_size": 128, 00:14:08.563 "task_count": 2048 00:14:08.563 } 00:14:08.563 } 00:14:08.563 ] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "bdev", 00:14:08.563 "config": [ 00:14:08.563 { 00:14:08.563 "method": "bdev_set_options", 00:14:08.563 "params": { 00:14:08.563 "bdev_auto_examine": true, 00:14:08.563 "bdev_io_cache_size": 256, 00:14:08.563 "bdev_io_pool_size": 65535, 00:14:08.563 "iobuf_large_cache_size": 16, 00:14:08.563 "iobuf_small_cache_size": 128 00:14:08.563 } 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "method": "bdev_raid_set_options", 00:14:08.563 "params": { 00:14:08.563 "process_window_size_kb": 1024 00:14:08.563 } 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "method": "bdev_iscsi_set_options", 00:14:08.563 "params": { 00:14:08.563 "timeout_sec": 30 00:14:08.563 } 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "method": "bdev_nvme_set_options", 00:14:08.563 "params": { 00:14:08.563 "action_on_timeout": "none", 00:14:08.563 "allow_accel_sequence": false, 00:14:08.563 "arbitration_burst": 0, 00:14:08.563 "bdev_retry_count": 3, 00:14:08.563 "ctrlr_loss_timeout_sec": 0, 00:14:08.563 "delay_cmd_submit": true, 00:14:08.563 "dhchap_dhgroups": [ 00:14:08.563 "null", 00:14:08.563 "ffdhe2048", 00:14:08.563 
"ffdhe3072", 00:14:08.563 "ffdhe4096", 00:14:08.563 "ffdhe6144", 00:14:08.563 "ffdhe8192" 00:14:08.563 ], 00:14:08.563 "dhchap_digests": [ 00:14:08.563 "sha256", 00:14:08.563 "sha384", 00:14:08.563 "sha512" 00:14:08.563 ], 00:14:08.563 "disable_auto_failback": false, 00:14:08.563 "fast_io_fail_timeout_sec": 0, 00:14:08.563 "generate_uuids": false, 00:14:08.563 "high_priority_weight": 0, 00:14:08.563 "io_path_stat": false, 00:14:08.563 "io_queue_requests": 0, 00:14:08.563 "keep_alive_timeout_ms": 10000, 00:14:08.563 "low_priority_weight": 0, 00:14:08.563 "medium_priority_weight": 0, 00:14:08.563 "nvme_adminq_poll_period_us": 10000, 00:14:08.563 "nvme_error_stat": false, 00:14:08.563 "nvme_ioq_poll_period_us": 0, 00:14:08.563 "rdma_cm_event_timeout_ms": 0, 00:14:08.563 "rdma_max_cq_size": 0, 00:14:08.563 "rdma_srq_size": 0, 00:14:08.563 "reconnect_delay_sec": 0, 00:14:08.563 "timeout_admin_us": 0, 00:14:08.563 "timeout_us": 0, 00:14:08.563 "transport_ack_timeout": 0, 00:14:08.563 "transport_retry_count": 4, 00:14:08.563 "transport_tos": 0 00:14:08.563 } 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "method": "bdev_nvme_set_hotplug", 00:14:08.563 "params": { 00:14:08.563 "enable": false, 00:14:08.563 "period_us": 100000 00:14:08.563 } 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "method": "bdev_wait_for_examine" 00:14:08.563 } 00:14:08.563 ] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "scsi", 00:14:08.563 "config": null 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "scheduler", 00:14:08.563 "config": [ 00:14:08.563 { 00:14:08.563 "method": "framework_set_scheduler", 00:14:08.563 "params": { 00:14:08.563 "name": "static" 00:14:08.563 } 00:14:08.563 } 00:14:08.563 ] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "vhost_scsi", 00:14:08.563 "config": [] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "vhost_blk", 00:14:08.563 "config": [] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "ublk", 00:14:08.563 "config": [] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "nbd", 00:14:08.563 "config": [] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "nvmf", 00:14:08.563 "config": [ 00:14:08.563 { 00:14:08.563 "method": "nvmf_set_config", 00:14:08.563 "params": { 00:14:08.563 "admin_cmd_passthru": { 00:14:08.563 "identify_ctrlr": false 00:14:08.563 }, 00:14:08.563 "discovery_filter": "match_any" 00:14:08.563 } 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "method": "nvmf_set_max_subsystems", 00:14:08.563 "params": { 00:14:08.563 "max_subsystems": 1024 00:14:08.563 } 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "method": "nvmf_set_crdt", 00:14:08.563 "params": { 00:14:08.563 "crdt1": 0, 00:14:08.563 "crdt2": 0, 00:14:08.563 "crdt3": 0 00:14:08.563 } 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "method": "nvmf_create_transport", 00:14:08.563 "params": { 00:14:08.563 "abort_timeout_sec": 1, 00:14:08.563 "ack_timeout": 0, 00:14:08.563 "buf_cache_size": 4294967295, 00:14:08.563 "c2h_success": true, 00:14:08.563 "dif_insert_or_strip": false, 00:14:08.563 "in_capsule_data_size": 4096, 00:14:08.563 "io_unit_size": 131072, 00:14:08.563 "max_aq_depth": 128, 00:14:08.563 "max_io_qpairs_per_ctrlr": 127, 00:14:08.563 "max_io_size": 131072, 00:14:08.563 "max_queue_depth": 128, 00:14:08.563 "num_shared_buffers": 511, 00:14:08.563 "sock_priority": 0, 00:14:08.563 "trtype": "TCP", 00:14:08.563 "zcopy": false 00:14:08.563 } 00:14:08.563 } 00:14:08.563 ] 00:14:08.563 }, 00:14:08.563 { 00:14:08.563 "subsystem": "iscsi", 00:14:08.563 "config": [ 00:14:08.563 { 
00:14:08.563 "method": "iscsi_set_options", 00:14:08.563 "params": { 00:14:08.563 "allow_duplicated_isid": false, 00:14:08.563 "chap_group": 0, 00:14:08.563 "data_out_pool_size": 2048, 00:14:08.563 "default_time2retain": 20, 00:14:08.563 "default_time2wait": 2, 00:14:08.563 "disable_chap": false, 00:14:08.563 "error_recovery_level": 0, 00:14:08.563 "first_burst_length": 8192, 00:14:08.563 "immediate_data": true, 00:14:08.563 "immediate_data_pool_size": 16384, 00:14:08.563 "max_connections_per_session": 2, 00:14:08.563 "max_large_datain_per_connection": 64, 00:14:08.563 "max_queue_depth": 64, 00:14:08.563 "max_r2t_per_connection": 4, 00:14:08.563 "max_sessions": 128, 00:14:08.563 "mutual_chap": false, 00:14:08.563 "node_base": "iqn.2016-06.io.spdk", 00:14:08.563 "nop_in_interval": 30, 00:14:08.563 "nop_timeout": 60, 00:14:08.563 "pdu_pool_size": 36864, 00:14:08.563 "require_chap": false 00:14:08.563 } 00:14:08.563 } 00:14:08.563 ] 00:14:08.563 } 00:14:08.563 ] 00:14:08.563 } 00:14:08.563 11:05:16 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:08.563 11:05:16 -- rpc/skip_rpc.sh@40 -- # killprocess 60689 00:14:08.563 11:05:16 -- common/autotest_common.sh@936 -- # '[' -z 60689 ']' 00:14:08.563 11:05:16 -- common/autotest_common.sh@940 -- # kill -0 60689 00:14:08.563 11:05:16 -- common/autotest_common.sh@941 -- # uname 00:14:08.563 11:05:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:08.563 11:05:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60689 00:14:08.563 11:05:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:08.563 killing process with pid 60689 00:14:08.563 11:05:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:08.564 11:05:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60689' 00:14:08.564 11:05:16 -- common/autotest_common.sh@955 -- # kill 60689 00:14:08.564 11:05:16 -- common/autotest_common.sh@960 -- # wait 60689 00:14:11.110 11:05:18 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60747 00:14:11.110 11:05:18 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:11.110 11:05:18 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:14:16.375 11:05:23 -- rpc/skip_rpc.sh@50 -- # killprocess 60747 00:14:16.375 11:05:23 -- common/autotest_common.sh@936 -- # '[' -z 60747 ']' 00:14:16.375 11:05:23 -- common/autotest_common.sh@940 -- # kill -0 60747 00:14:16.375 11:05:23 -- common/autotest_common.sh@941 -- # uname 00:14:16.375 11:05:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:16.375 11:05:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60747 00:14:16.375 11:05:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:16.375 11:05:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:16.375 killing process with pid 60747 00:14:16.375 11:05:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60747' 00:14:16.375 11:05:23 -- common/autotest_common.sh@955 -- # kill 60747 00:14:16.375 11:05:23 -- common/autotest_common.sh@960 -- # wait 60747 00:14:18.277 11:05:26 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:18.277 11:05:26 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:18.277 00:14:18.277 real 0m10.979s 00:14:18.277 user 0m10.332s 00:14:18.277 sys 0m0.965s 00:14:18.277 11:05:26 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:14:18.277 11:05:26 -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 ************************************ 00:14:18.277 END TEST skip_rpc_with_json 00:14:18.277 ************************************ 00:14:18.277 11:05:26 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:14:18.277 11:05:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:18.277 11:05:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:18.277 11:05:26 -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 ************************************ 00:14:18.277 START TEST skip_rpc_with_delay 00:14:18.277 ************************************ 00:14:18.277 11:05:26 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:14:18.277 11:05:26 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:18.277 11:05:26 -- common/autotest_common.sh@638 -- # local es=0 00:14:18.277 11:05:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:18.277 11:05:26 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:18.277 11:05:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:18.277 11:05:26 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:18.277 11:05:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:18.277 11:05:26 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:18.277 11:05:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:18.277 11:05:26 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:18.277 11:05:26 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:14:18.277 11:05:26 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:18.277 [2024-04-18 11:05:26.384497] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
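That spdk_app_start error is exactly what skip_rpc_with_delay is looking for: --wait-for-rpc makes the app hold off subsystem initialization until a framework_start_init RPC arrives, so combining it with --no-rpc-server can never make progress and is rejected at startup. A rough sketch of the check (the NOT helper in the log expects the command to exit non-zero; binary path as above):

  # expected to fail: with no RPC server, framework_start_init can never be delivered
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "spdk_tgt unexpectedly started" >&2
      exit 1
  fi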
00:14:18.277 [2024-04-18 11:05:26.384696] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:14:18.277 11:05:26 -- common/autotest_common.sh@641 -- # es=1 00:14:18.277 11:05:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:18.277 11:05:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:18.277 11:05:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:18.277 00:14:18.277 real 0m0.190s 00:14:18.277 user 0m0.102s 00:14:18.277 sys 0m0.085s 00:14:18.277 11:05:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:18.277 ************************************ 00:14:18.277 END TEST skip_rpc_with_delay 00:14:18.277 ************************************ 00:14:18.277 11:05:26 -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 11:05:26 -- rpc/skip_rpc.sh@77 -- # uname 00:14:18.277 11:05:26 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:14:18.277 11:05:26 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:14:18.277 11:05:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:18.277 11:05:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:18.277 11:05:26 -- common/autotest_common.sh@10 -- # set +x 00:14:18.535 ************************************ 00:14:18.535 START TEST exit_on_failed_rpc_init 00:14:18.535 ************************************ 00:14:18.535 11:05:26 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:14:18.535 11:05:26 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60889 00:14:18.535 11:05:26 -- rpc/skip_rpc.sh@63 -- # waitforlisten 60889 00:14:18.535 11:05:26 -- common/autotest_common.sh@817 -- # '[' -z 60889 ']' 00:14:18.535 11:05:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.535 11:05:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:18.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.535 11:05:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.535 11:05:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:18.535 11:05:26 -- common/autotest_common.sh@10 -- # set +x 00:14:18.535 11:05:26 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:18.535 [2024-04-18 11:05:26.688078] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:18.535 [2024-04-18 11:05:26.688236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60889 ] 00:14:18.794 [2024-04-18 11:05:26.853925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.052 [2024-04-18 11:05:27.136824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.022 11:05:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.022 11:05:27 -- common/autotest_common.sh@850 -- # return 0 00:14:20.022 11:05:27 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:20.022 11:05:27 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:20.022 11:05:27 -- common/autotest_common.sh@638 -- # local es=0 00:14:20.022 11:05:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:20.022 11:05:27 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:20.022 11:05:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:20.022 11:05:27 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:20.022 11:05:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:20.022 11:05:27 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:20.022 11:05:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:20.022 11:05:27 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:20.022 11:05:27 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:14:20.022 11:05:27 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:20.022 [2024-04-18 11:05:28.065711] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:20.022 [2024-04-18 11:05:28.065878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60924 ] 00:14:20.022 [2024-04-18 11:05:28.239590] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.589 [2024-04-18 11:05:28.519337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.589 [2024-04-18 11:05:28.519461] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
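This is the collision exit_on_failed_rpc_init provokes: both instances use the default RPC socket, /var/tmp/spdk.sock, so the second target (core mask 0x2, pid 60924) cannot bind it while the first (core mask 0x1, pid 60889) is still running, and its spdk_app_start fails. A sketch of the clash and of the usual way around it; the alternate socket name here is only an example:

  # first instance owns /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x1 &
  # second instance on a different core mask still fails: same default RPC socket
  build/bin/spdk_tgt -m 0x2        # exits non-zero, "socket path ... in use"
  # giving the second instance its own socket (-r) avoids the conflict
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &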
00:14:20.589 [2024-04-18 11:05:28.519484] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:14:20.589 [2024-04-18 11:05:28.519501] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:20.847 11:05:28 -- common/autotest_common.sh@641 -- # es=234 00:14:20.848 11:05:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:20.848 11:05:28 -- common/autotest_common.sh@650 -- # es=106 00:14:20.848 11:05:28 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:20.848 11:05:28 -- common/autotest_common.sh@658 -- # es=1 00:14:20.848 11:05:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:20.848 11:05:28 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:20.848 11:05:28 -- rpc/skip_rpc.sh@70 -- # killprocess 60889 00:14:20.848 11:05:28 -- common/autotest_common.sh@936 -- # '[' -z 60889 ']' 00:14:20.848 11:05:28 -- common/autotest_common.sh@940 -- # kill -0 60889 00:14:20.848 11:05:28 -- common/autotest_common.sh@941 -- # uname 00:14:20.848 11:05:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:20.848 11:05:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60889 00:14:20.848 11:05:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:20.848 11:05:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:20.848 11:05:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60889' 00:14:20.848 killing process with pid 60889 00:14:20.848 11:05:28 -- common/autotest_common.sh@955 -- # kill 60889 00:14:20.848 11:05:28 -- common/autotest_common.sh@960 -- # wait 60889 00:14:23.377 00:14:23.377 real 0m4.565s 00:14:23.377 user 0m5.222s 00:14:23.377 sys 0m0.669s 00:14:23.377 11:05:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:23.377 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:14:23.377 ************************************ 00:14:23.377 END TEST exit_on_failed_rpc_init 00:14:23.377 ************************************ 00:14:23.377 11:05:31 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:23.377 00:14:23.377 real 0m23.550s 00:14:23.377 user 0m22.562s 00:14:23.377 sys 0m2.452s 00:14:23.377 11:05:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:23.377 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:14:23.377 ************************************ 00:14:23.377 END TEST skip_rpc 00:14:23.377 ************************************ 00:14:23.377 11:05:31 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:14:23.377 11:05:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:23.377 11:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:23.377 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:14:23.377 ************************************ 00:14:23.377 START TEST rpc_client 00:14:23.377 ************************************ 00:14:23.377 11:05:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:14:23.377 * Looking for test storage... 
00:14:23.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:14:23.377 11:05:31 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:14:23.377 OK 00:14:23.377 11:05:31 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:14:23.377 00:14:23.377 real 0m0.148s 00:14:23.377 user 0m0.064s 00:14:23.377 sys 0m0.091s 00:14:23.377 11:05:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:23.377 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:14:23.377 ************************************ 00:14:23.377 END TEST rpc_client 00:14:23.377 ************************************ 00:14:23.377 11:05:31 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:14:23.377 11:05:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:23.377 11:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:23.377 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:14:23.377 ************************************ 00:14:23.377 START TEST json_config 00:14:23.377 ************************************ 00:14:23.377 11:05:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:14:23.377 11:05:31 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:23.377 11:05:31 -- nvmf/common.sh@7 -- # uname -s 00:14:23.377 11:05:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.377 11:05:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.377 11:05:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.377 11:05:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.377 11:05:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.377 11:05:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.377 11:05:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.377 11:05:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.378 11:05:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.378 11:05:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.649 11:05:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:14:23.649 11:05:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:14:23.649 11:05:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.649 11:05:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.649 11:05:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:23.649 11:05:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.649 11:05:31 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.649 11:05:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.649 11:05:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.649 11:05:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.649 11:05:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.649 11:05:31 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.649 11:05:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.649 11:05:31 -- paths/export.sh@5 -- # export PATH 00:14:23.649 11:05:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.649 11:05:31 -- nvmf/common.sh@47 -- # : 0 00:14:23.649 11:05:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.649 11:05:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.649 11:05:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.649 11:05:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.649 11:05:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.649 11:05:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.649 11:05:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.649 11:05:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.649 11:05:31 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:14:23.649 11:05:31 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:14:23.649 11:05:31 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:14:23.649 11:05:31 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:14:23.649 11:05:31 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:14:23.649 11:05:31 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:14:23.649 11:05:31 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:14:23.649 11:05:31 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:14:23.649 11:05:31 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:14:23.649 11:05:31 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:14:23.649 11:05:31 -- json_config/json_config.sh@33 -- # declare -A app_params 00:14:23.649 11:05:31 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:14:23.649 11:05:31 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:14:23.649 11:05:31 -- json_config/json_config.sh@40 -- # last_event_id=0 00:14:23.649 
11:05:31 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:14:23.649 INFO: JSON configuration test init 00:14:23.650 11:05:31 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:14:23.650 11:05:31 -- json_config/json_config.sh@357 -- # json_config_test_init 00:14:23.650 11:05:31 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:14:23.650 11:05:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:23.650 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:14:23.650 11:05:31 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:14:23.650 11:05:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:23.650 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:14:23.650 11:05:31 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:14:23.650 11:05:31 -- json_config/common.sh@9 -- # local app=target 00:14:23.650 11:05:31 -- json_config/common.sh@10 -- # shift 00:14:23.650 11:05:31 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:23.650 11:05:31 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:23.650 11:05:31 -- json_config/common.sh@15 -- # local app_extra_params= 00:14:23.650 11:05:31 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:23.650 11:05:31 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:23.650 11:05:31 -- json_config/common.sh@22 -- # app_pid["$app"]=61083 00:14:23.650 11:05:31 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:14:23.650 11:05:31 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:23.650 Waiting for target to run... 00:14:23.650 11:05:31 -- json_config/common.sh@25 -- # waitforlisten 61083 /var/tmp/spdk_tgt.sock 00:14:23.650 11:05:31 -- common/autotest_common.sh@817 -- # '[' -z 61083 ']' 00:14:23.650 11:05:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:23.650 11:05:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:23.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:23.650 11:05:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:23.650 11:05:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:23.650 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:14:23.650 [2024-04-18 11:05:31.720872] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
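The json_config target above is started with -r /var/tmp/spdk_tgt.sock and --wait-for-rpc, so it idles until an RPC arrives on that non-default socket; the tgt_rpc wrapper therefore runs rpc.py with -s /var/tmp/spdk_tgt.sock. A minimal sketch of the same interaction done by hand, assuming the target is still waiting for its start-up RPC (the config file name matches the configs_path set above):

  # point rpc.py at the non-default socket and replay a previously saved configuration
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  # ...or, with no config to replay, just let the framework initialize with defaults
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init
  # dump the live configuration back out for comparison
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json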
00:14:23.650 [2024-04-18 11:05:31.721012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61083 ] 00:14:24.226 [2024-04-18 11:05:32.175032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.226 [2024-04-18 11:05:32.381639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.484 11:05:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:24.484 00:14:24.484 11:05:32 -- common/autotest_common.sh@850 -- # return 0 00:14:24.484 11:05:32 -- json_config/common.sh@26 -- # echo '' 00:14:24.484 11:05:32 -- json_config/json_config.sh@269 -- # create_accel_config 00:14:24.484 11:05:32 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:14:24.485 11:05:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:24.485 11:05:32 -- common/autotest_common.sh@10 -- # set +x 00:14:24.485 11:05:32 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:14:24.485 11:05:32 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:14:24.485 11:05:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:24.485 11:05:32 -- common/autotest_common.sh@10 -- # set +x 00:14:24.485 11:05:32 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:14:24.485 11:05:32 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:14:24.485 11:05:32 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:14:25.419 11:05:33 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:14:25.419 11:05:33 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:14:25.419 11:05:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:25.419 11:05:33 -- common/autotest_common.sh@10 -- # set +x 00:14:25.419 11:05:33 -- json_config/json_config.sh@45 -- # local ret=0 00:14:25.419 11:05:33 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:14:25.419 11:05:33 -- json_config/json_config.sh@46 -- # local enabled_types 00:14:25.419 11:05:33 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:14:25.419 11:05:33 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:14:25.419 11:05:33 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:14:25.985 11:05:33 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:14:25.985 11:05:33 -- json_config/json_config.sh@48 -- # local get_types 00:14:25.985 11:05:33 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:14:25.985 11:05:33 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:14:25.985 11:05:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:25.985 11:05:33 -- common/autotest_common.sh@10 -- # set +x 00:14:25.985 11:05:33 -- json_config/json_config.sh@55 -- # return 0 00:14:25.985 11:05:33 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:14:25.985 11:05:33 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:14:25.985 11:05:33 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:14:25.985 11:05:33 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
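tgt_check_notification_types above asserts that the freshly configured target advertises exactly the bdev_register and bdev_unregister notification types. A small sketch of inspecting the same thing by hand over the test's socket; notify_get_notifications is shown only as the companion call for reading queued events:

  # list the notification types the target supports (expected: bdev_register, bdev_unregister)
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
  # read the notification queue itself, e.g. after creating or deleting a bdev
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications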
00:14:25.985 11:05:33 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:14:25.985 11:05:33 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:14:25.985 11:05:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:25.985 11:05:33 -- common/autotest_common.sh@10 -- # set +x 00:14:25.985 11:05:33 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:14:25.985 11:05:33 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:14:25.985 11:05:33 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:14:25.985 11:05:33 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:14:25.985 11:05:33 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:14:26.243 MallocForNvmf0 00:14:26.243 11:05:34 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:14:26.243 11:05:34 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:14:26.501 MallocForNvmf1 00:14:26.501 11:05:34 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:14:26.501 11:05:34 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:14:26.758 [2024-04-18 11:05:34.744926] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.758 11:05:34 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:26.758 11:05:34 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:27.016 11:05:35 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:14:27.016 11:05:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:14:27.274 11:05:35 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:14:27.274 11:05:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:14:27.531 11:05:35 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:14:27.531 11:05:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:14:27.789 [2024-04-18 11:05:35.765635] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:14:27.789 11:05:35 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:14:27.789 11:05:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:27.789 11:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:27.789 11:05:35 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:14:27.789 11:05:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:27.789 11:05:35 -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.789 11:05:35 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:14:27.789 11:05:35 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:14:27.789 11:05:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:14:28.046 MallocBdevForConfigChangeCheck 00:14:28.046 11:05:36 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:14:28.046 11:05:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:28.046 11:05:36 -- common/autotest_common.sh@10 -- # set +x 00:14:28.046 11:05:36 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:14:28.046 11:05:36 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:28.305 11:05:36 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:14:28.305 INFO: shutting down applications... 00:14:28.305 11:05:36 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:14:28.305 11:05:36 -- json_config/json_config.sh@368 -- # json_config_clear target 00:14:28.305 11:05:36 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:14:28.305 11:05:36 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:14:28.871 Calling clear_iscsi_subsystem 00:14:28.871 Calling clear_nvmf_subsystem 00:14:28.871 Calling clear_nbd_subsystem 00:14:28.871 Calling clear_ublk_subsystem 00:14:28.871 Calling clear_vhost_blk_subsystem 00:14:28.871 Calling clear_vhost_scsi_subsystem 00:14:28.871 Calling clear_bdev_subsystem 00:14:28.871 11:05:36 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:14:28.871 11:05:36 -- json_config/json_config.sh@343 -- # count=100 00:14:28.871 11:05:36 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:14:28.871 11:05:36 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:28.871 11:05:36 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:14:28.871 11:05:36 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:14:29.129 11:05:37 -- json_config/json_config.sh@345 -- # break 00:14:29.129 11:05:37 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:14:29.129 11:05:37 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:14:29.129 11:05:37 -- json_config/common.sh@31 -- # local app=target 00:14:29.129 11:05:37 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:29.129 11:05:37 -- json_config/common.sh@35 -- # [[ -n 61083 ]] 00:14:29.129 11:05:37 -- json_config/common.sh@38 -- # kill -SIGINT 61083 00:14:29.129 11:05:37 -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:29.129 11:05:37 -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:29.129 11:05:37 -- json_config/common.sh@41 -- # kill -0 61083 00:14:29.129 11:05:37 -- json_config/common.sh@45 -- # sleep 0.5 00:14:29.699 11:05:37 -- json_config/common.sh@40 -- # (( i++ )) 00:14:29.699 11:05:37 -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:29.699 11:05:37 -- json_config/common.sh@41 -- # kill -0 61083 00:14:29.699 11:05:37 -- 
json_config/common.sh@45 -- # sleep 0.5 00:14:30.264 11:05:38 -- json_config/common.sh@40 -- # (( i++ )) 00:14:30.264 11:05:38 -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:30.264 11:05:38 -- json_config/common.sh@41 -- # kill -0 61083 00:14:30.264 11:05:38 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:30.264 11:05:38 -- json_config/common.sh@43 -- # break 00:14:30.264 11:05:38 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:14:30.264 SPDK target shutdown done 00:14:30.264 11:05:38 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:30.264 INFO: relaunching applications... 00:14:30.264 11:05:38 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:14:30.264 11:05:38 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:30.264 11:05:38 -- json_config/common.sh@9 -- # local app=target 00:14:30.264 11:05:38 -- json_config/common.sh@10 -- # shift 00:14:30.264 11:05:38 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:30.264 11:05:38 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:30.264 11:05:38 -- json_config/common.sh@15 -- # local app_extra_params= 00:14:30.264 11:05:38 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:30.264 11:05:38 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:30.264 11:05:38 -- json_config/common.sh@22 -- # app_pid["$app"]=61370 00:14:30.264 11:05:38 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:30.264 11:05:38 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:30.264 Waiting for target to run... 00:14:30.265 11:05:38 -- json_config/common.sh@25 -- # waitforlisten 61370 /var/tmp/spdk_tgt.sock 00:14:30.265 11:05:38 -- common/autotest_common.sh@817 -- # '[' -z 61370 ']' 00:14:30.265 11:05:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:30.265 11:05:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:30.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:30.265 11:05:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:30.265 11:05:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:30.265 11:05:38 -- common/autotest_common.sh@10 -- # set +x 00:14:30.265 [2024-04-18 11:05:38.324316] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:30.265 [2024-04-18 11:05:38.324513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61370 ] 00:14:30.830 [2024-04-18 11:05:38.771028] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.830 [2024-04-18 11:05:38.979831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.762 [2024-04-18 11:05:39.895051] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.762 [2024-04-18 11:05:39.927143] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:14:31.762 11:05:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:31.762 11:05:39 -- common/autotest_common.sh@850 -- # return 0 00:14:31.762 00:14:31.762 11:05:39 -- json_config/common.sh@26 -- # echo '' 00:14:31.762 11:05:39 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:14:31.762 INFO: Checking if target configuration is the same... 00:14:31.762 11:05:39 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:14:31.762 11:05:39 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:31.762 11:05:39 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:14:31.762 11:05:39 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:31.762 + '[' 2 -ne 2 ']' 00:14:31.762 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:14:31.762 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:14:31.762 + rootdir=/home/vagrant/spdk_repo/spdk 00:14:31.762 +++ basename /dev/fd/62 00:14:31.762 ++ mktemp /tmp/62.XXX 00:14:32.019 + tmp_file_1=/tmp/62.VEC 00:14:32.019 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:32.019 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:14:32.019 + tmp_file_2=/tmp/spdk_tgt_config.json.HcT 00:14:32.019 + ret=0 00:14:32.019 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:32.277 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:32.277 + diff -u /tmp/62.VEC /tmp/spdk_tgt_config.json.HcT 00:14:32.277 + echo 'INFO: JSON config files are the same' 00:14:32.277 INFO: JSON config files are the same 00:14:32.277 + rm /tmp/62.VEC /tmp/spdk_tgt_config.json.HcT 00:14:32.277 + exit 0 00:14:32.277 11:05:40 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:14:32.277 INFO: changing configuration and checking if this can be detected... 00:14:32.277 11:05:40 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:14:32.277 11:05:40 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:14:32.277 11:05:40 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:14:32.535 11:05:40 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:14:32.535 11:05:40 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:32.535 11:05:40 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:32.535 + '[' 2 -ne 2 ']' 00:14:32.535 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:14:32.535 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:14:32.535 + rootdir=/home/vagrant/spdk_repo/spdk 00:14:32.535 +++ basename /dev/fd/62 00:14:32.535 ++ mktemp /tmp/62.XXX 00:14:32.535 + tmp_file_1=/tmp/62.ele 00:14:32.535 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:32.535 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:14:32.535 + tmp_file_2=/tmp/spdk_tgt_config.json.2Mi 00:14:32.535 + ret=0 00:14:32.535 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:33.101 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:33.101 + diff -u /tmp/62.ele /tmp/spdk_tgt_config.json.2Mi 00:14:33.101 + ret=1 00:14:33.101 + echo '=== Start of file: /tmp/62.ele ===' 00:14:33.101 + cat /tmp/62.ele 00:14:33.101 + echo '=== End of file: /tmp/62.ele ===' 00:14:33.101 + echo '' 00:14:33.101 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2Mi ===' 00:14:33.101 + cat /tmp/spdk_tgt_config.json.2Mi 00:14:33.101 + echo '=== End of file: /tmp/spdk_tgt_config.json.2Mi ===' 00:14:33.101 + echo '' 00:14:33.101 + rm /tmp/62.ele /tmp/spdk_tgt_config.json.2Mi 00:14:33.101 + exit 1 00:14:33.101 INFO: configuration change detected. 00:14:33.101 11:05:41 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
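Both configuration checks above reduce to the same recipe: dump the live configuration over RPC, normalize key ordering with config_filter.py, and diff against the normalized reference file; an empty diff means the configurations match, any hunk means a change was detected. A by-hand sketch under the paths used in this run (the /tmp file names are illustrative; json_diff.sh uses mktemp instead):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
# Dump and sort the live configuration of the running target
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
# Sort the reference configuration the target was launched with
$filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/ref.json
# Exit status 0: configurations match; non-zero plus a unified diff: a change was detected
diff -u /tmp/ref.json /tmp/live.json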
00:14:33.101 11:05:41 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:14:33.101 11:05:41 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:14:33.101 11:05:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:33.101 11:05:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.101 11:05:41 -- json_config/json_config.sh@307 -- # local ret=0 00:14:33.101 11:05:41 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:14:33.101 11:05:41 -- json_config/json_config.sh@317 -- # [[ -n 61370 ]] 00:14:33.101 11:05:41 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:14:33.101 11:05:41 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:14:33.101 11:05:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:33.101 11:05:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.101 11:05:41 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:14:33.101 11:05:41 -- json_config/json_config.sh@193 -- # uname -s 00:14:33.101 11:05:41 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:14:33.101 11:05:41 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:14:33.101 11:05:41 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:14:33.101 11:05:41 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:14:33.101 11:05:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:33.101 11:05:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.101 11:05:41 -- json_config/json_config.sh@323 -- # killprocess 61370 00:14:33.101 11:05:41 -- common/autotest_common.sh@936 -- # '[' -z 61370 ']' 00:14:33.101 11:05:41 -- common/autotest_common.sh@940 -- # kill -0 61370 00:14:33.101 11:05:41 -- common/autotest_common.sh@941 -- # uname 00:14:33.101 11:05:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:33.101 11:05:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61370 00:14:33.101 killing process with pid 61370 00:14:33.101 11:05:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:33.101 11:05:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:33.101 11:05:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61370' 00:14:33.101 11:05:41 -- common/autotest_common.sh@955 -- # kill 61370 00:14:33.101 11:05:41 -- common/autotest_common.sh@960 -- # wait 61370 00:14:34.033 11:05:42 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:34.033 11:05:42 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:14:34.033 11:05:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:34.033 11:05:42 -- common/autotest_common.sh@10 -- # set +x 00:14:34.033 INFO: Success 00:14:34.033 11:05:42 -- json_config/json_config.sh@328 -- # return 0 00:14:34.033 11:05:42 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:14:34.033 ************************************ 00:14:34.033 END TEST json_config 00:14:34.033 ************************************ 00:14:34.033 00:14:34.033 real 0m10.655s 00:14:34.033 user 0m13.947s 00:14:34.033 sys 0m2.082s 00:14:34.033 11:05:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:34.033 11:05:42 -- common/autotest_common.sh@10 -- # set +x 00:14:34.033 11:05:42 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:14:34.033 
11:05:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:34.033 11:05:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.033 11:05:42 -- common/autotest_common.sh@10 -- # set +x 00:14:34.291 ************************************ 00:14:34.291 START TEST json_config_extra_key 00:14:34.291 ************************************ 00:14:34.291 11:05:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:14:34.291 11:05:42 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.291 11:05:42 -- nvmf/common.sh@7 -- # uname -s 00:14:34.291 11:05:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.291 11:05:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.291 11:05:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.291 11:05:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.291 11:05:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.291 11:05:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.292 11:05:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.292 11:05:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.292 11:05:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.292 11:05:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.292 11:05:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:14:34.292 11:05:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:14:34.292 11:05:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.292 11:05:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.292 11:05:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:34.292 11:05:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.292 11:05:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.292 11:05:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.292 11:05:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.292 11:05:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.292 11:05:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.292 11:05:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.292 11:05:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.292 11:05:42 -- paths/export.sh@5 -- # export PATH 00:14:34.292 11:05:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.292 11:05:42 -- nvmf/common.sh@47 -- # : 0 00:14:34.292 11:05:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.292 11:05:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.292 11:05:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.292 11:05:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.292 11:05:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.292 11:05:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.292 11:05:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.292 11:05:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:14:34.292 INFO: launching applications... 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:14:34.292 11:05:42 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:14:34.292 11:05:42 -- json_config/common.sh@9 -- # local app=target 00:14:34.292 11:05:42 -- json_config/common.sh@10 -- # shift 00:14:34.292 11:05:42 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:34.292 11:05:42 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:34.292 11:05:42 -- json_config/common.sh@15 -- # local app_extra_params= 00:14:34.292 11:05:42 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:34.292 11:05:42 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:34.292 11:05:42 -- json_config/common.sh@22 -- # app_pid["$app"]=61563 00:14:34.292 11:05:42 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:34.292 11:05:42 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:14:34.292 Waiting for target to run... 00:14:34.292 11:05:42 -- json_config/common.sh@25 -- # waitforlisten 61563 /var/tmp/spdk_tgt.sock 00:14:34.292 11:05:42 -- common/autotest_common.sh@817 -- # '[' -z 61563 ']' 00:14:34.292 11:05:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:34.292 11:05:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:34.292 11:05:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:34.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:34.292 11:05:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:34.292 11:05:42 -- common/autotest_common.sh@10 -- # set +x 00:14:34.292 [2024-04-18 11:05:42.510943] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:34.549 [2024-04-18 11:05:42.511839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61563 ] 00:14:34.806 [2024-04-18 11:05:42.991669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.065 [2024-04-18 11:05:43.206053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.631 11:05:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:35.631 00:14:35.631 INFO: shutting down applications... 00:14:35.631 11:05:43 -- common/autotest_common.sh@850 -- # return 0 00:14:35.631 11:05:43 -- json_config/common.sh@26 -- # echo '' 00:14:35.631 11:05:43 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
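The launch traced above is an ordinary spdk_tgt invocation with an explicit JSON configuration, followed by polling until the RPC socket answers. A rough equivalent of what json_config_test_start_app plus waitforlisten do, assuming the same paths as this run (the 30 x 0.5 s loop and the rpc_get_methods probe are illustrative; the helper in common.sh is more careful):

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock
# Same flags as the trace: one core, 1024 MiB of memory, custom RPC socket, JSON config
$bin -m 0x1 -s 1024 -r $sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
# Wait until the target responds on the UNIX domain socket
for _ in $(seq 1 30); do
    $rpc -s $sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done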
00:14:35.631 11:05:43 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:14:35.631 11:05:43 -- json_config/common.sh@31 -- # local app=target 00:14:35.631 11:05:43 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:35.631 11:05:43 -- json_config/common.sh@35 -- # [[ -n 61563 ]] 00:14:35.631 11:05:43 -- json_config/common.sh@38 -- # kill -SIGINT 61563 00:14:35.631 11:05:43 -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:35.631 11:05:43 -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:35.631 11:05:43 -- json_config/common.sh@41 -- # kill -0 61563 00:14:35.631 11:05:43 -- json_config/common.sh@45 -- # sleep 0.5 00:14:36.195 11:05:44 -- json_config/common.sh@40 -- # (( i++ )) 00:14:36.195 11:05:44 -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:36.195 11:05:44 -- json_config/common.sh@41 -- # kill -0 61563 00:14:36.195 11:05:44 -- json_config/common.sh@45 -- # sleep 0.5 00:14:36.759 11:05:44 -- json_config/common.sh@40 -- # (( i++ )) 00:14:36.759 11:05:44 -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:36.759 11:05:44 -- json_config/common.sh@41 -- # kill -0 61563 00:14:36.759 11:05:44 -- json_config/common.sh@45 -- # sleep 0.5 00:14:37.326 11:05:45 -- json_config/common.sh@40 -- # (( i++ )) 00:14:37.326 11:05:45 -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:37.326 11:05:45 -- json_config/common.sh@41 -- # kill -0 61563 00:14:37.326 11:05:45 -- json_config/common.sh@45 -- # sleep 0.5 00:14:37.891 11:05:45 -- json_config/common.sh@40 -- # (( i++ )) 00:14:37.891 11:05:45 -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:37.891 11:05:45 -- json_config/common.sh@41 -- # kill -0 61563 00:14:37.891 11:05:45 -- json_config/common.sh@45 -- # sleep 0.5 00:14:38.149 11:05:46 -- json_config/common.sh@40 -- # (( i++ )) 00:14:38.149 11:05:46 -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:38.149 11:05:46 -- json_config/common.sh@41 -- # kill -0 61563 00:14:38.149 11:05:46 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:38.149 SPDK target shutdown done 00:14:38.149 Success 00:14:38.149 11:05:46 -- json_config/common.sh@43 -- # break 00:14:38.149 11:05:46 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:14:38.149 11:05:46 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:38.149 11:05:46 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:14:38.149 ************************************ 00:14:38.149 END TEST json_config_extra_key 00:14:38.149 ************************************ 00:14:38.149 00:14:38.149 real 0m4.060s 00:14:38.149 user 0m3.872s 00:14:38.149 sys 0m0.622s 00:14:38.149 11:05:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:38.149 11:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:38.407 11:05:46 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:38.407 11:05:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:38.407 11:05:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.407 11:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:38.407 ************************************ 00:14:38.407 START TEST alias_rpc 00:14:38.407 ************************************ 00:14:38.407 11:05:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:38.407 * Looking for test storage... 
00:14:38.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:14:38.407 11:05:46 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:38.407 11:05:46 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61682 00:14:38.407 11:05:46 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61682 00:14:38.407 11:05:46 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.407 11:05:46 -- common/autotest_common.sh@817 -- # '[' -z 61682 ']' 00:14:38.407 11:05:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.407 11:05:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:38.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.407 11:05:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.407 11:05:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:38.407 11:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:38.665 [2024-04-18 11:05:46.647670] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:38.665 [2024-04-18 11:05:46.647812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61682 ] 00:14:38.665 [2024-04-18 11:05:46.812210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.922 [2024-04-18 11:05:47.092532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.856 11:05:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:39.856 11:05:47 -- common/autotest_common.sh@850 -- # return 0 00:14:39.856 11:05:47 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:14:40.114 11:05:48 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61682 00:14:40.114 11:05:48 -- common/autotest_common.sh@936 -- # '[' -z 61682 ']' 00:14:40.114 11:05:48 -- common/autotest_common.sh@940 -- # kill -0 61682 00:14:40.114 11:05:48 -- common/autotest_common.sh@941 -- # uname 00:14:40.114 11:05:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:40.114 11:05:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61682 00:14:40.114 killing process with pid 61682 00:14:40.114 11:05:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:40.114 11:05:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:40.114 11:05:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61682' 00:14:40.114 11:05:48 -- common/autotest_common.sh@955 -- # kill 61682 00:14:40.114 11:05:48 -- common/autotest_common.sh@960 -- # wait 61682 00:14:42.642 ************************************ 00:14:42.642 END TEST alias_rpc 00:14:42.642 ************************************ 00:14:42.642 00:14:42.642 real 0m4.068s 00:14:42.642 user 0m4.154s 00:14:42.642 sys 0m0.589s 00:14:42.642 11:05:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:42.642 11:05:50 -- common/autotest_common.sh@10 -- # set +x 00:14:42.642 11:05:50 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:14:42.642 11:05:50 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:42.642 11:05:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:14:42.642 11:05:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:42.642 11:05:50 -- common/autotest_common.sh@10 -- # set +x 00:14:42.642 ************************************ 00:14:42.642 START TEST dpdk_mem_utility 00:14:42.642 ************************************ 00:14:42.642 11:05:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:42.642 * Looking for test storage... 00:14:42.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:14:42.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.642 11:05:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:14:42.642 11:05:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61804 00:14:42.642 11:05:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.642 11:05:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61804 00:14:42.642 11:05:50 -- common/autotest_common.sh@817 -- # '[' -z 61804 ']' 00:14:42.642 11:05:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.642 11:05:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:42.642 11:05:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.642 11:05:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:42.642 11:05:50 -- common/autotest_common.sh@10 -- # set +x 00:14:42.642 [2024-04-18 11:05:50.838516] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:42.642 [2024-04-18 11:05:50.838968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61804 ] 00:14:42.900 [2024-04-18 11:05:51.013262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.158 [2024-04-18 11:05:51.320979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.114 11:05:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:44.114 11:05:52 -- common/autotest_common.sh@850 -- # return 0 00:14:44.114 11:05:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:14:44.114 11:05:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:14:44.114 11:05:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:44.114 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:14:44.114 { 00:14:44.114 "filename": "/tmp/spdk_mem_dump.txt" 00:14:44.114 } 00:14:44.114 11:05:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:44.114 11:05:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:14:44.114 DPDK memory size 820.000000 MiB in 1 heap(s) 00:14:44.114 1 heaps totaling size 820.000000 MiB 00:14:44.114 size: 820.000000 MiB heap id: 0 00:14:44.114 end heaps---------- 00:14:44.114 8 mempools totaling size 598.116089 MiB 00:14:44.114 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:14:44.114 size: 158.602051 MiB name: PDU_data_out_Pool 00:14:44.114 size: 84.521057 MiB name: bdev_io_61804 00:14:44.114 size: 51.011292 MiB name: evtpool_61804 00:14:44.114 size: 
50.003479 MiB name: msgpool_61804 00:14:44.114 size: 21.763794 MiB name: PDU_Pool 00:14:44.114 size: 19.513306 MiB name: SCSI_TASK_Pool 00:14:44.114 size: 0.026123 MiB name: Session_Pool 00:14:44.114 end mempools------- 00:14:44.114 6 memzones totaling size 4.142822 MiB 00:14:44.114 size: 1.000366 MiB name: RG_ring_0_61804 00:14:44.114 size: 1.000366 MiB name: RG_ring_1_61804 00:14:44.114 size: 1.000366 MiB name: RG_ring_4_61804 00:14:44.114 size: 1.000366 MiB name: RG_ring_5_61804 00:14:44.114 size: 0.125366 MiB name: RG_ring_2_61804 00:14:44.114 size: 0.015991 MiB name: RG_ring_3_61804 00:14:44.114 end memzones------- 00:14:44.114 11:05:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:14:44.114 heap id: 0 total size: 820.000000 MiB number of busy elements: 227 number of free elements: 18 00:14:44.114 list of free elements. size: 18.469482 MiB 00:14:44.114 element at address: 0x200000400000 with size: 1.999451 MiB 00:14:44.114 element at address: 0x200000800000 with size: 1.996887 MiB 00:14:44.114 element at address: 0x200007000000 with size: 1.995972 MiB 00:14:44.114 element at address: 0x20000b200000 with size: 1.995972 MiB 00:14:44.114 element at address: 0x200019100040 with size: 0.999939 MiB 00:14:44.114 element at address: 0x200019500040 with size: 0.999939 MiB 00:14:44.114 element at address: 0x200019600000 with size: 0.999329 MiB 00:14:44.114 element at address: 0x200003e00000 with size: 0.996094 MiB 00:14:44.114 element at address: 0x200032200000 with size: 0.994324 MiB 00:14:44.114 element at address: 0x200018e00000 with size: 0.959656 MiB 00:14:44.114 element at address: 0x200019900040 with size: 0.937256 MiB 00:14:44.114 element at address: 0x200000200000 with size: 0.834351 MiB 00:14:44.114 element at address: 0x20001b000000 with size: 0.568542 MiB 00:14:44.114 element at address: 0x200019200000 with size: 0.488708 MiB 00:14:44.114 element at address: 0x200019a00000 with size: 0.485413 MiB 00:14:44.114 element at address: 0x200013800000 with size: 0.468872 MiB 00:14:44.114 element at address: 0x200028400000 with size: 0.392639 MiB 00:14:44.114 element at address: 0x200003a00000 with size: 0.356140 MiB 00:14:44.114 list of standard malloc elements. 
size: 199.266113 MiB 00:14:44.114 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:14:44.114 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:14:44.114 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:14:44.114 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:14:44.114 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:14:44.114 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:14:44.114 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:14:44.114 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:14:44.114 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:14:44.114 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:14:44.114 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:14:44.114 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:14:44.114 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200003aff980 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200003affa80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200003eff000 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ff180 with size: 0.000244 MiB 
00:14:44.115 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200013878080 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200013878180 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200013878280 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200013878380 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200013878480 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200013878580 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200019abc680 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:14:44.115 element at 
address: 0x20001b091dc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b094ec0 
with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200028464840 with size: 0.000244 MiB 00:14:44.115 element at address: 0x200028464940 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846b600 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846b880 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846b980 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846be80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846c080 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846c180 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846c280 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846c380 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846c480 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846c580 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846c680 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846c780 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846c880 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846c980 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846d080 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846d180 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846d280 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846d380 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846d480 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846d580 with size: 0.000244 MiB 00:14:44.115 element at address: 0x20002846d680 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846d780 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846d880 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846d980 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846da80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846db80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846de80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846df80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846e080 with size: 0.000244 MiB 
00:14:44.116 element at address: 0x20002846e180 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846e280 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846e380 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846e480 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846e580 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846e680 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846e780 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846e880 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846e980 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846f080 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846f180 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846f280 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846f380 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846f480 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846f580 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846f680 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846f780 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846f880 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846f980 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:14:44.116 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:14:44.116 list of memzone associated elements. 
size: 602.264404 MiB 00:14:44.116 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:14:44.116 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:14:44.116 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:14:44.116 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:14:44.116 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:14:44.116 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61804_0 00:14:44.116 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:14:44.116 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61804_0 00:14:44.116 element at address: 0x200003fff340 with size: 48.003113 MiB 00:14:44.116 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61804_0 00:14:44.116 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:14:44.116 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:14:44.116 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:14:44.116 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:14:44.116 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:14:44.116 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61804 00:14:44.116 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:14:44.116 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61804 00:14:44.116 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:14:44.116 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61804 00:14:44.116 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:14:44.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:14:44.116 element at address: 0x200019abc780 with size: 1.008179 MiB 00:14:44.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:14:44.116 element at address: 0x200018efde00 with size: 1.008179 MiB 00:14:44.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:14:44.116 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:14:44.116 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:14:44.116 element at address: 0x200003eff100 with size: 1.000549 MiB 00:14:44.116 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61804 00:14:44.116 element at address: 0x200003affb80 with size: 1.000549 MiB 00:14:44.116 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61804 00:14:44.116 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:14:44.116 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61804 00:14:44.116 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:14:44.116 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61804 00:14:44.116 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:14:44.116 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61804 00:14:44.116 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:14:44.116 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:14:44.116 element at address: 0x200013878680 with size: 0.500549 MiB 00:14:44.116 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:14:44.116 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:14:44.116 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:14:44.116 element at address: 0x200003adf740 with size: 0.125549 MiB 00:14:44.116 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61804 00:14:44.116 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:14:44.116 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:14:44.116 element at address: 0x200028464a40 with size: 0.023804 MiB 00:14:44.116 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:14:44.116 element at address: 0x200003adb500 with size: 0.016174 MiB 00:14:44.116 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61804 00:14:44.116 element at address: 0x20002846abc0 with size: 0.002502 MiB 00:14:44.116 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:14:44.116 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:14:44.116 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61804 00:14:44.116 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:14:44.116 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61804 00:14:44.116 element at address: 0x20002846b700 with size: 0.000366 MiB 00:14:44.116 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:14:44.116 11:05:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:14:44.116 11:05:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61804 00:14:44.116 11:05:52 -- common/autotest_common.sh@936 -- # '[' -z 61804 ']' 00:14:44.116 11:05:52 -- common/autotest_common.sh@940 -- # kill -0 61804 00:14:44.116 11:05:52 -- common/autotest_common.sh@941 -- # uname 00:14:44.373 11:05:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:44.373 11:05:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61804 00:14:44.373 11:05:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:44.373 11:05:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:44.373 11:05:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61804' 00:14:44.373 killing process with pid 61804 00:14:44.373 11:05:52 -- common/autotest_common.sh@955 -- # kill 61804 00:14:44.373 11:05:52 -- common/autotest_common.sh@960 -- # wait 61804 00:14:46.912 00:14:46.912 real 0m4.011s 00:14:46.912 user 0m4.051s 00:14:46.912 sys 0m0.629s 00:14:46.912 11:05:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:46.912 ************************************ 00:14:46.912 END TEST dpdk_mem_utility 00:14:46.912 ************************************ 00:14:46.912 11:05:54 -- common/autotest_common.sh@10 -- # set +x 00:14:46.912 11:05:54 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:14:46.912 11:05:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:46.912 11:05:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.912 11:05:54 -- common/autotest_common.sh@10 -- # set +x 00:14:46.912 ************************************ 00:14:46.912 START TEST event 00:14:46.912 ************************************ 00:14:46.912 11:05:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:14:46.912 * Looking for test storage... 
00:14:46.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:14:46.912 11:05:54 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:46.912 11:05:54 -- bdev/nbd_common.sh@6 -- # set -e 00:14:46.912 11:05:54 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:46.912 11:05:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:46.912 11:05:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.912 11:05:54 -- common/autotest_common.sh@10 -- # set +x 00:14:46.912 ************************************ 00:14:46.912 START TEST event_perf 00:14:46.912 ************************************ 00:14:46.912 11:05:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:46.912 Running I/O for 1 seconds...[2024-04-18 11:05:54.973672] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:46.912 [2024-04-18 11:05:54.973983] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61931 ] 00:14:47.170 [2024-04-18 11:05:55.151266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.427 [2024-04-18 11:05:55.431601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.427 [2024-04-18 11:05:55.431737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.427 [2024-04-18 11:05:55.431856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.427 Running I/O for 1 seconds...[2024-04-18 11:05:55.431874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.801 00:14:48.801 lcore 0: 192233 00:14:48.801 lcore 1: 192231 00:14:48.801 lcore 2: 192231 00:14:48.801 lcore 3: 192231 00:14:48.801 done. 00:14:48.801 00:14:48.801 real 0m1.870s 00:14:48.801 ************************************ 00:14:48.801 END TEST event_perf 00:14:48.801 ************************************ 00:14:48.801 user 0m4.603s 00:14:48.801 sys 0m0.139s 00:14:48.801 11:05:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:48.801 11:05:56 -- common/autotest_common.sh@10 -- # set +x 00:14:48.801 11:05:56 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:14:48.802 11:05:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:48.802 11:05:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:48.802 11:05:56 -- common/autotest_common.sh@10 -- # set +x 00:14:48.802 ************************************ 00:14:48.802 START TEST event_reactor 00:14:48.802 ************************************ 00:14:48.802 11:05:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:14:48.802 [2024-04-18 11:05:56.941472] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:48.802 [2024-04-18 11:05:56.941617] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61981 ] 00:14:49.060 [2024-04-18 11:05:57.104705] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.318 [2024-04-18 11:05:57.330573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.744 test_start 00:14:50.744 oneshot 00:14:50.744 tick 100 00:14:50.744 tick 100 00:14:50.744 tick 250 00:14:50.744 tick 100 00:14:50.744 tick 100 00:14:50.744 tick 100 00:14:50.744 tick 250 00:14:50.744 tick 500 00:14:50.744 tick 100 00:14:50.744 tick 100 00:14:50.744 tick 250 00:14:50.744 tick 100 00:14:50.744 tick 100 00:14:50.744 test_end 00:14:50.744 00:14:50.744 real 0m1.784s 00:14:50.744 user 0m1.573s 00:14:50.744 sys 0m0.103s 00:14:50.744 11:05:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:50.744 11:05:58 -- common/autotest_common.sh@10 -- # set +x 00:14:50.744 ************************************ 00:14:50.744 END TEST event_reactor 00:14:50.744 ************************************ 00:14:50.744 11:05:58 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:50.744 11:05:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:50.744 11:05:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.744 11:05:58 -- common/autotest_common.sh@10 -- # set +x 00:14:50.744 ************************************ 00:14:50.744 START TEST event_reactor_perf 00:14:50.744 ************************************ 00:14:50.744 11:05:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:50.744 [2024-04-18 11:05:58.850044] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:50.744 [2024-04-18 11:05:58.850243] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62023 ] 00:14:51.003 [2024-04-18 11:05:59.026161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.261 [2024-04-18 11:05:59.300754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.634 test_start 00:14:52.634 test_end 00:14:52.634 Performance: 274468 events per second 00:14:52.634 00:14:52.634 real 0m1.863s 00:14:52.634 user 0m1.640s 00:14:52.634 sys 0m0.113s 00:14:52.634 11:06:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.634 11:06:00 -- common/autotest_common.sh@10 -- # set +x 00:14:52.634 ************************************ 00:14:52.634 END TEST event_reactor_perf 00:14:52.634 ************************************ 00:14:52.634 11:06:00 -- event/event.sh@49 -- # uname -s 00:14:52.634 11:06:00 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:14:52.634 11:06:00 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:14:52.634 11:06:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:52.634 11:06:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.634 11:06:00 -- common/autotest_common.sh@10 -- # set +x 00:14:52.634 ************************************ 00:14:52.634 START TEST event_scheduler 00:14:52.634 ************************************ 00:14:52.634 11:06:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:14:52.634 * Looking for test storage... 00:14:52.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:14:52.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.893 11:06:00 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:14:52.893 11:06:00 -- scheduler/scheduler.sh@35 -- # scheduler_pid=62096 00:14:52.893 11:06:00 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:14:52.893 11:06:00 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:14:52.893 11:06:00 -- scheduler/scheduler.sh@37 -- # waitforlisten 62096 00:14:52.893 11:06:00 -- common/autotest_common.sh@817 -- # '[' -z 62096 ']' 00:14:52.893 11:06:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.893 11:06:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:52.893 11:06:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.893 11:06:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:52.893 11:06:00 -- common/autotest_common.sh@10 -- # set +x 00:14:52.893 [2024-04-18 11:06:00.963522] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:52.893 [2024-04-18 11:06:00.963952] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62096 ] 00:14:53.151 [2024-04-18 11:06:01.138833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.408 [2024-04-18 11:06:01.446376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.408 [2024-04-18 11:06:01.446461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.408 [2024-04-18 11:06:01.446563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.408 [2024-04-18 11:06:01.446589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.667 11:06:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:53.667 11:06:01 -- common/autotest_common.sh@850 -- # return 0 00:14:53.667 11:06:01 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:14:53.667 11:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.667 11:06:01 -- common/autotest_common.sh@10 -- # set +x 00:14:53.667 POWER: Env isn't set yet! 00:14:53.667 POWER: Attempting to initialise ACPI cpufreq power management... 00:14:53.667 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:53.667 POWER: Cannot set governor of lcore 0 to userspace 00:14:53.667 POWER: Attempting to initialise PSTAT power management... 00:14:53.667 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:53.667 POWER: Cannot set governor of lcore 0 to performance 00:14:53.667 POWER: Attempting to initialise AMD PSTATE power management... 00:14:53.667 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:53.667 POWER: Cannot set governor of lcore 0 to userspace 00:14:53.667 POWER: Attempting to initialise CPPC power management... 00:14:53.667 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:53.667 POWER: Cannot set governor of lcore 0 to userspace 00:14:53.667 POWER: Attempting to initialise VM power management... 00:14:53.667 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:14:53.667 POWER: Unable to set Power Management Environment for lcore 0 00:14:53.667 [2024-04-18 11:06:01.861062] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:14:53.667 [2024-04-18 11:06:01.861087] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:14:53.667 [2024-04-18 11:06:01.861100] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:14:53.667 11:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.667 11:06:01 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:14:53.667 11:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.667 11:06:01 -- common/autotest_common.sh@10 -- # set +x 00:14:54.233 [2024-04-18 11:06:02.197292] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:14:54.233 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.233 11:06:02 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:14:54.233 11:06:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:54.233 11:06:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:54.233 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.233 ************************************ 00:14:54.233 START TEST scheduler_create_thread 00:14:54.233 ************************************ 00:14:54.233 11:06:02 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:14:54.233 11:06:02 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:14:54.233 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.233 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.233 2 00:14:54.233 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.233 11:06:02 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:14:54.233 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.233 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.233 3 00:14:54.233 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.233 11:06:02 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:14:54.233 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.233 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.233 4 00:14:54.233 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.233 11:06:02 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:14:54.233 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.233 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.233 5 00:14:54.233 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.233 11:06:02 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:14:54.233 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.233 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.233 6 00:14:54.233 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.233 11:06:02 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:14:54.233 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.233 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.233 7 00:14:54.234 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.234 11:06:02 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:14:54.234 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.234 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.234 8 00:14:54.234 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.234 11:06:02 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:14:54.234 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.234 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.234 9 00:14:54.234 
11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.234 11:06:02 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:14:54.234 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.234 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.234 10 00:14:54.234 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.234 11:06:02 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:14:54.234 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.234 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.234 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.234 11:06:02 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:14:54.234 11:06:02 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:14:54.234 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.234 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:54.234 11:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.234 11:06:02 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:14:54.234 11:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.234 11:06:02 -- common/autotest_common.sh@10 -- # set +x 00:14:55.608 11:06:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:55.866 11:06:03 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:14:55.866 11:06:03 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:14:55.866 11:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:55.866 11:06:03 -- common/autotest_common.sh@10 -- # set +x 00:14:56.801 ************************************ 00:14:56.801 END TEST scheduler_create_thread 00:14:56.801 ************************************ 00:14:56.801 11:06:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.801 00:14:56.801 real 0m2.621s 00:14:56.801 user 0m0.013s 00:14:56.801 sys 0m0.011s 00:14:56.801 11:06:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:56.801 11:06:04 -- common/autotest_common.sh@10 -- # set +x 00:14:56.801 11:06:04 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:56.801 11:06:04 -- scheduler/scheduler.sh@46 -- # killprocess 62096 00:14:56.801 11:06:04 -- common/autotest_common.sh@936 -- # '[' -z 62096 ']' 00:14:56.801 11:06:04 -- common/autotest_common.sh@940 -- # kill -0 62096 00:14:56.801 11:06:04 -- common/autotest_common.sh@941 -- # uname 00:14:56.801 11:06:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.801 11:06:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62096 00:14:56.801 killing process with pid 62096 00:14:56.801 11:06:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:56.801 11:06:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:56.801 11:06:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62096' 00:14:56.801 11:06:04 -- common/autotest_common.sh@955 -- # kill 62096 00:14:56.801 11:06:04 -- common/autotest_common.sh@960 -- # wait 62096 00:14:57.059 [2024-04-18 11:06:05.272292] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:14:58.433 00:14:58.433 real 0m5.710s 00:14:58.433 user 0m11.197s 00:14:58.433 sys 0m0.561s 00:14:58.433 11:06:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:58.433 ************************************ 00:14:58.433 11:06:06 -- common/autotest_common.sh@10 -- # set +x 00:14:58.433 END TEST event_scheduler 00:14:58.433 ************************************ 00:14:58.433 11:06:06 -- event/event.sh@51 -- # modprobe -n nbd 00:14:58.433 11:06:06 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:14:58.433 11:06:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:58.433 11:06:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.433 11:06:06 -- common/autotest_common.sh@10 -- # set +x 00:14:58.433 ************************************ 00:14:58.433 START TEST app_repeat 00:14:58.433 ************************************ 00:14:58.433 11:06:06 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:14:58.433 11:06:06 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.433 11:06:06 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:58.433 11:06:06 -- event/event.sh@13 -- # local nbd_list 00:14:58.433 11:06:06 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:58.433 11:06:06 -- event/event.sh@14 -- # local bdev_list 00:14:58.433 11:06:06 -- event/event.sh@15 -- # local repeat_times=4 00:14:58.433 11:06:06 -- event/event.sh@17 -- # modprobe nbd 00:14:58.433 11:06:06 -- event/event.sh@19 -- # repeat_pid=62233 00:14:58.433 11:06:06 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:14:58.433 11:06:06 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:14:58.433 Process app_repeat pid: 62233 00:14:58.433 11:06:06 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62233' 00:14:58.433 11:06:06 -- event/event.sh@23 -- # for i in {0..2} 00:14:58.433 spdk_app_start Round 0 00:14:58.433 11:06:06 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:14:58.433 11:06:06 -- event/event.sh@25 -- # waitforlisten 62233 /var/tmp/spdk-nbd.sock 00:14:58.433 11:06:06 -- common/autotest_common.sh@817 -- # '[' -z 62233 ']' 00:14:58.433 11:06:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:58.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:58.433 11:06:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:58.433 11:06:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:58.433 11:06:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:58.433 11:06:06 -- common/autotest_common.sh@10 -- # set +x 00:14:58.691 [2024-04-18 11:06:06.666705] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:58.691 [2024-04-18 11:06:06.666880] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62233 ] 00:14:58.691 [2024-04-18 11:06:06.829843] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:58.949 [2024-04-18 11:06:07.080826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.949 [2024-04-18 11:06:07.080842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.516 11:06:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:59.516 11:06:07 -- common/autotest_common.sh@850 -- # return 0 00:14:59.516 11:06:07 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:00.081 Malloc0 00:15:00.081 11:06:08 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:00.339 Malloc1 00:15:00.339 11:06:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@12 -- # local i 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:00.339 11:06:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:00.598 /dev/nbd0 00:15:00.598 11:06:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:00.598 11:06:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:00.598 11:06:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:00.598 11:06:08 -- common/autotest_common.sh@855 -- # local i 00:15:00.598 11:06:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:00.598 11:06:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:00.598 11:06:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:00.598 11:06:08 -- common/autotest_common.sh@859 -- # break 00:15:00.598 11:06:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:00.598 11:06:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:00.598 11:06:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:00.598 1+0 records in 00:15:00.598 1+0 records out 00:15:00.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003338 s, 12.3 MB/s 00:15:00.598 11:06:08 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:00.598 11:06:08 -- common/autotest_common.sh@872 -- # size=4096 00:15:00.598 11:06:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:00.598 11:06:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:00.598 11:06:08 -- common/autotest_common.sh@875 -- # return 0 00:15:00.598 11:06:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.598 11:06:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:00.598 11:06:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:00.855 /dev/nbd1 00:15:00.855 11:06:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:00.855 11:06:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:00.855 11:06:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:15:00.855 11:06:08 -- common/autotest_common.sh@855 -- # local i 00:15:00.855 11:06:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:00.855 11:06:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:00.855 11:06:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:15:00.855 11:06:08 -- common/autotest_common.sh@859 -- # break 00:15:00.855 11:06:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:00.855 11:06:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:00.855 11:06:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:00.855 1+0 records in 00:15:00.855 1+0 records out 00:15:00.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317762 s, 12.9 MB/s 00:15:00.855 11:06:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:00.855 11:06:08 -- common/autotest_common.sh@872 -- # size=4096 00:15:00.855 11:06:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:00.855 11:06:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:00.855 11:06:08 -- common/autotest_common.sh@875 -- # return 0 00:15:00.855 11:06:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.855 11:06:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:00.855 11:06:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:00.855 11:06:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.855 11:06:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:01.115 { 00:15:01.115 "bdev_name": "Malloc0", 00:15:01.115 "nbd_device": "/dev/nbd0" 00:15:01.115 }, 00:15:01.115 { 00:15:01.115 "bdev_name": "Malloc1", 00:15:01.115 "nbd_device": "/dev/nbd1" 00:15:01.115 } 00:15:01.115 ]' 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:01.115 { 00:15:01.115 "bdev_name": "Malloc0", 00:15:01.115 "nbd_device": "/dev/nbd0" 00:15:01.115 }, 00:15:01.115 { 00:15:01.115 "bdev_name": "Malloc1", 00:15:01.115 "nbd_device": "/dev/nbd1" 00:15:01.115 } 00:15:01.115 ]' 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:01.115 /dev/nbd1' 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:15:01.115 /dev/nbd1' 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@65 -- # count=2 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@95 -- # count=2 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:01.115 256+0 records in 00:15:01.115 256+0 records out 00:15:01.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.007855 s, 133 MB/s 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:01.115 256+0 records in 00:15:01.115 256+0 records out 00:15:01.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297517 s, 35.2 MB/s 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:01.115 11:06:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:01.372 256+0 records in 00:15:01.372 256+0 records out 00:15:01.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290597 s, 36.1 MB/s 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@51 -- # local i 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.372 11:06:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:01.629 11:06:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:01.629 11:06:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:01.629 11:06:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:01.630 11:06:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.630 11:06:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.630 11:06:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:01.630 11:06:09 -- bdev/nbd_common.sh@41 -- # break 00:15:01.630 11:06:09 -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.630 11:06:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.630 11:06:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@41 -- # break 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.887 11:06:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@65 -- # true 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@65 -- # count=0 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@104 -- # count=0 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:02.144 11:06:10 -- bdev/nbd_common.sh@109 -- # return 0 00:15:02.144 11:06:10 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:02.710 11:06:10 -- event/event.sh@35 -- # sleep 3 00:15:04.085 [2024-04-18 11:06:11.963771] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:04.085 [2024-04-18 11:06:12.203793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.085 [2024-04-18 11:06:12.203799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.343 [2024-04-18 11:06:12.403890] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:04.343 [2024-04-18 11:06:12.403963] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:15:05.737 11:06:13 -- event/event.sh@23 -- # for i in {0..2} 00:15:05.737 spdk_app_start Round 1 00:15:05.737 11:06:13 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:15:05.737 11:06:13 -- event/event.sh@25 -- # waitforlisten 62233 /var/tmp/spdk-nbd.sock 00:15:05.737 11:06:13 -- common/autotest_common.sh@817 -- # '[' -z 62233 ']' 00:15:05.737 11:06:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:05.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:05.737 11:06:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.737 11:06:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:05.737 11:06:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.737 11:06:13 -- common/autotest_common.sh@10 -- # set +x 00:15:05.996 11:06:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.996 11:06:14 -- common/autotest_common.sh@850 -- # return 0 00:15:05.996 11:06:14 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:06.255 Malloc0 00:15:06.255 11:06:14 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:06.821 Malloc1 00:15:06.821 11:06:14 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@12 -- # local i 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:06.821 11:06:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:06.821 /dev/nbd0 00:15:06.821 11:06:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.821 11:06:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.821 11:06:15 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:06.821 11:06:15 -- common/autotest_common.sh@855 -- # local i 00:15:06.821 11:06:15 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:06.821 11:06:15 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:06.821 11:06:15 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:06.821 11:06:15 -- common/autotest_common.sh@859 -- # break 00:15:06.821 11:06:15 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:06.821 11:06:15 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:15:06.821 11:06:15 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:06.821 1+0 records in 00:15:06.821 1+0 records out 00:15:06.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272505 s, 15.0 MB/s 00:15:06.821 11:06:15 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:06.821 11:06:15 -- common/autotest_common.sh@872 -- # size=4096 00:15:06.821 11:06:15 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:07.079 11:06:15 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:07.079 11:06:15 -- common/autotest_common.sh@875 -- # return 0 00:15:07.079 11:06:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.079 11:06:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.079 11:06:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:07.337 /dev/nbd1 00:15:07.337 11:06:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:07.337 11:06:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:07.337 11:06:15 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:15:07.337 11:06:15 -- common/autotest_common.sh@855 -- # local i 00:15:07.337 11:06:15 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:07.337 11:06:15 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:07.337 11:06:15 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:15:07.337 11:06:15 -- common/autotest_common.sh@859 -- # break 00:15:07.337 11:06:15 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:07.337 11:06:15 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:07.337 11:06:15 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:07.337 1+0 records in 00:15:07.337 1+0 records out 00:15:07.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236056 s, 17.4 MB/s 00:15:07.337 11:06:15 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:07.337 11:06:15 -- common/autotest_common.sh@872 -- # size=4096 00:15:07.337 11:06:15 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:07.337 11:06:15 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:07.337 11:06:15 -- common/autotest_common.sh@875 -- # return 0 00:15:07.337 11:06:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.337 11:06:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.337 11:06:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:07.337 11:06:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:07.337 11:06:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:07.595 { 00:15:07.595 "bdev_name": "Malloc0", 00:15:07.595 "nbd_device": "/dev/nbd0" 00:15:07.595 }, 00:15:07.595 { 00:15:07.595 "bdev_name": "Malloc1", 00:15:07.595 "nbd_device": "/dev/nbd1" 00:15:07.595 } 00:15:07.595 ]' 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:07.595 { 00:15:07.595 "bdev_name": "Malloc0", 00:15:07.595 "nbd_device": "/dev/nbd0" 00:15:07.595 }, 00:15:07.595 { 00:15:07.595 "bdev_name": "Malloc1", 00:15:07.595 "nbd_device": "/dev/nbd1" 00:15:07.595 } 
00:15:07.595 ]' 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:07.595 /dev/nbd1' 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:07.595 /dev/nbd1' 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@65 -- # count=2 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@95 -- # count=2 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:07.595 256+0 records in 00:15:07.595 256+0 records out 00:15:07.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00761058 s, 138 MB/s 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:07.595 256+0 records in 00:15:07.595 256+0 records out 00:15:07.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337615 s, 31.1 MB/s 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:07.595 256+0 records in 00:15:07.595 256+0 records out 00:15:07.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0393427 s, 26.7 MB/s 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:15:07.595 11:06:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@51 -- # local i 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.595 11:06:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:08.160 11:06:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@41 -- # break 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.161 11:06:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@41 -- # break 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@65 -- # true 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@65 -- # count=0 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@104 -- # count=0 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:08.443 11:06:16 -- bdev/nbd_common.sh@109 -- # return 0 00:15:08.443 11:06:16 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:09.008 11:06:17 -- event/event.sh@35 -- # sleep 3 00:15:10.383 [2024-04-18 11:06:18.497996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:10.642 [2024-04-18 11:06:18.762934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.642 [2024-04-18 11:06:18.762944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.900 [2024-04-18 11:06:18.962557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:15:10.900 [2024-04-18 11:06:18.962680] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:12.275 spdk_app_start Round 2 00:15:12.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:12.276 11:06:20 -- event/event.sh@23 -- # for i in {0..2} 00:15:12.276 11:06:20 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:15:12.276 11:06:20 -- event/event.sh@25 -- # waitforlisten 62233 /var/tmp/spdk-nbd.sock 00:15:12.276 11:06:20 -- common/autotest_common.sh@817 -- # '[' -z 62233 ']' 00:15:12.276 11:06:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:12.276 11:06:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:12.276 11:06:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:12.276 11:06:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:12.276 11:06:20 -- common/autotest_common.sh@10 -- # set +x 00:15:12.276 11:06:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.276 11:06:20 -- common/autotest_common.sh@850 -- # return 0 00:15:12.276 11:06:20 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:12.534 Malloc0 00:15:12.792 11:06:20 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:13.051 Malloc1 00:15:13.051 11:06:21 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@12 -- # local i 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.051 11:06:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:13.309 /dev/nbd0 00:15:13.309 11:06:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:13.309 11:06:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:13.309 11:06:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:13.309 11:06:21 -- common/autotest_common.sh@855 -- # local i 00:15:13.309 11:06:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:13.309 11:06:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:13.309 11:06:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:13.309 11:06:21 -- common/autotest_common.sh@859 
-- # break 00:15:13.309 11:06:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:13.309 11:06:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:13.309 11:06:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:13.309 1+0 records in 00:15:13.309 1+0 records out 00:15:13.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247935 s, 16.5 MB/s 00:15:13.309 11:06:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:13.309 11:06:21 -- common/autotest_common.sh@872 -- # size=4096 00:15:13.309 11:06:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:13.309 11:06:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:13.309 11:06:21 -- common/autotest_common.sh@875 -- # return 0 00:15:13.309 11:06:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.309 11:06:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.309 11:06:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:13.568 /dev/nbd1 00:15:13.568 11:06:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.568 11:06:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.568 11:06:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:15:13.568 11:06:21 -- common/autotest_common.sh@855 -- # local i 00:15:13.568 11:06:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:13.568 11:06:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:13.568 11:06:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:15:13.568 11:06:21 -- common/autotest_common.sh@859 -- # break 00:15:13.568 11:06:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:13.568 11:06:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:13.568 11:06:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:13.568 1+0 records in 00:15:13.568 1+0 records out 00:15:13.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336835 s, 12.2 MB/s 00:15:13.568 11:06:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:13.568 11:06:21 -- common/autotest_common.sh@872 -- # size=4096 00:15:13.568 11:06:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:13.568 11:06:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:13.568 11:06:21 -- common/autotest_common.sh@875 -- # return 0 00:15:13.568 11:06:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.568 11:06:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.568 11:06:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:13.568 11:06:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.568 11:06:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:13.827 { 00:15:13.827 "bdev_name": "Malloc0", 00:15:13.827 "nbd_device": "/dev/nbd0" 00:15:13.827 }, 00:15:13.827 { 00:15:13.827 "bdev_name": "Malloc1", 00:15:13.827 "nbd_device": "/dev/nbd1" 00:15:13.827 } 00:15:13.827 ]' 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:13.827 { 00:15:13.827 "bdev_name": "Malloc0", 00:15:13.827 
"nbd_device": "/dev/nbd0" 00:15:13.827 }, 00:15:13.827 { 00:15:13.827 "bdev_name": "Malloc1", 00:15:13.827 "nbd_device": "/dev/nbd1" 00:15:13.827 } 00:15:13.827 ]' 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:13.827 /dev/nbd1' 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:13.827 /dev/nbd1' 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@65 -- # count=2 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@95 -- # count=2 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:13.827 256+0 records in 00:15:13.827 256+0 records out 00:15:13.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720825 s, 145 MB/s 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:13.827 256+0 records in 00:15:13.827 256+0 records out 00:15:13.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323308 s, 32.4 MB/s 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:13.827 11:06:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:13.827 256+0 records in 00:15:13.827 256+0 records out 00:15:13.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0356476 s, 29.4 MB/s 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:13.827 11:06:22 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@51 -- # local i 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.827 11:06:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:14.394 11:06:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:14.394 11:06:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:14.394 11:06:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:14.394 11:06:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.394 11:06:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.394 11:06:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:14.394 11:06:22 -- bdev/nbd_common.sh@41 -- # break 00:15:14.394 11:06:22 -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.394 11:06:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.394 11:06:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@41 -- # break 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:14.652 11:06:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:14.910 11:06:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:14.910 11:06:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:14.910 11:06:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:14.910 11:06:22 -- bdev/nbd_common.sh@65 -- # true 00:15:14.910 11:06:22 -- bdev/nbd_common.sh@65 -- # count=0 00:15:14.910 11:06:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:14.910 11:06:22 -- bdev/nbd_common.sh@104 -- # count=0 00:15:14.910 11:06:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:14.910 11:06:22 -- bdev/nbd_common.sh@109 -- # return 0 00:15:14.910 11:06:22 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:15.475 11:06:23 -- event/event.sh@35 -- # sleep 3 00:15:16.849 [2024-04-18 11:06:24.691530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:16.849 [2024-04-18 11:06:24.923258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.849 [2024-04-18 11:06:24.923267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.107 [2024-04-18 11:06:25.114343] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:15:17.107 [2024-04-18 11:06:25.114413] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:18.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:18.483 11:06:26 -- event/event.sh@38 -- # waitforlisten 62233 /var/tmp/spdk-nbd.sock 00:15:18.483 11:06:26 -- common/autotest_common.sh@817 -- # '[' -z 62233 ']' 00:15:18.483 11:06:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:18.483 11:06:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.483 11:06:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:18.483 11:06:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.483 11:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:18.483 11:06:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.483 11:06:26 -- common/autotest_common.sh@850 -- # return 0 00:15:18.483 11:06:26 -- event/event.sh@39 -- # killprocess 62233 00:15:18.483 11:06:26 -- common/autotest_common.sh@936 -- # '[' -z 62233 ']' 00:15:18.483 11:06:26 -- common/autotest_common.sh@940 -- # kill -0 62233 00:15:18.483 11:06:26 -- common/autotest_common.sh@941 -- # uname 00:15:18.483 11:06:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.483 11:06:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62233 00:15:18.483 killing process with pid 62233 00:15:18.483 11:06:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:18.483 11:06:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:18.483 11:06:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62233' 00:15:18.483 11:06:26 -- common/autotest_common.sh@955 -- # kill 62233 00:15:18.484 11:06:26 -- common/autotest_common.sh@960 -- # wait 62233 00:15:19.859 spdk_app_start is called in Round 0. 00:15:19.859 Shutdown signal received, stop current app iteration 00:15:19.859 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:15:19.859 spdk_app_start is called in Round 1. 00:15:19.859 Shutdown signal received, stop current app iteration 00:15:19.859 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:15:19.859 spdk_app_start is called in Round 2. 00:15:19.859 Shutdown signal received, stop current app iteration 00:15:19.859 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:15:19.859 spdk_app_start is called in Round 3. 
00:15:19.859 Shutdown signal received, stop current app iteration 00:15:19.859 11:06:27 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:15:19.859 11:06:27 -- event/event.sh@42 -- # return 0 00:15:19.859 00:15:19.859 real 0m21.153s 00:15:19.859 user 0m45.143s 00:15:19.859 sys 0m3.217s 00:15:19.859 11:06:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.859 11:06:27 -- common/autotest_common.sh@10 -- # set +x 00:15:19.859 ************************************ 00:15:19.859 END TEST app_repeat 00:15:19.859 ************************************ 00:15:19.859 11:06:27 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:15:19.859 11:06:27 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:19.859 11:06:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:19.859 11:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.859 11:06:27 -- common/autotest_common.sh@10 -- # set +x 00:15:19.859 ************************************ 00:15:19.859 START TEST cpu_locks 00:15:19.859 ************************************ 00:15:19.859 11:06:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:19.859 * Looking for test storage... 00:15:19.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:19.859 11:06:27 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:15:19.859 11:06:27 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:15:19.859 11:06:27 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:15:19.859 11:06:27 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:15:19.859 11:06:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:19.859 11:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.859 11:06:27 -- common/autotest_common.sh@10 -- # set +x 00:15:19.859 ************************************ 00:15:19.859 START TEST default_locks 00:15:19.859 ************************************ 00:15:19.859 11:06:28 -- common/autotest_common.sh@1111 -- # default_locks 00:15:19.859 11:06:28 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62889 00:15:19.859 11:06:28 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:19.859 11:06:28 -- event/cpu_locks.sh@47 -- # waitforlisten 62889 00:15:19.859 11:06:28 -- common/autotest_common.sh@817 -- # '[' -z 62889 ']' 00:15:19.859 11:06:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.859 11:06:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:19.859 11:06:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.859 11:06:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:19.859 11:06:28 -- common/autotest_common.sh@10 -- # set +x 00:15:20.117 [2024-04-18 11:06:28.180805] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:20.117 [2024-04-18 11:06:28.180979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62889 ] 00:15:20.375 [2024-04-18 11:06:28.355732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.375 [2024-04-18 11:06:28.595341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.310 11:06:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:21.310 11:06:29 -- common/autotest_common.sh@850 -- # return 0 00:15:21.310 11:06:29 -- event/cpu_locks.sh@49 -- # locks_exist 62889 00:15:21.310 11:06:29 -- event/cpu_locks.sh@22 -- # lslocks -p 62889 00:15:21.310 11:06:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:21.877 11:06:29 -- event/cpu_locks.sh@50 -- # killprocess 62889 00:15:21.877 11:06:29 -- common/autotest_common.sh@936 -- # '[' -z 62889 ']' 00:15:21.877 11:06:29 -- common/autotest_common.sh@940 -- # kill -0 62889 00:15:21.877 11:06:29 -- common/autotest_common.sh@941 -- # uname 00:15:21.877 11:06:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:21.877 11:06:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62889 00:15:21.877 11:06:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:21.877 killing process with pid 62889 00:15:21.877 11:06:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:21.877 11:06:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62889' 00:15:21.877 11:06:29 -- common/autotest_common.sh@955 -- # kill 62889 00:15:21.877 11:06:29 -- common/autotest_common.sh@960 -- # wait 62889 00:15:24.405 11:06:32 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62889 00:15:24.405 11:06:32 -- common/autotest_common.sh@638 -- # local es=0 00:15:24.405 11:06:32 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62889 00:15:24.405 11:06:32 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:15:24.405 11:06:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:24.405 11:06:32 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:15:24.405 11:06:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:24.405 11:06:32 -- common/autotest_common.sh@641 -- # waitforlisten 62889 00:15:24.405 11:06:32 -- common/autotest_common.sh@817 -- # '[' -z 62889 ']' 00:15:24.405 11:06:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.405 ERROR: process (pid: 62889) is no longer running 00:15:24.405 11:06:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:24.405 11:06:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:24.405 11:06:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:24.405 11:06:32 -- common/autotest_common.sh@10 -- # set +x 00:15:24.405 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62889) - No such process 00:15:24.405 11:06:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:24.405 11:06:32 -- common/autotest_common.sh@850 -- # return 1 00:15:24.405 11:06:32 -- common/autotest_common.sh@641 -- # es=1 00:15:24.405 11:06:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:24.405 11:06:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:24.405 11:06:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:24.405 11:06:32 -- event/cpu_locks.sh@54 -- # no_locks 00:15:24.405 11:06:32 -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:24.405 11:06:32 -- event/cpu_locks.sh@26 -- # local lock_files 00:15:24.405 11:06:32 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:24.405 00:15:24.405 real 0m4.037s 00:15:24.405 user 0m4.041s 00:15:24.405 sys 0m0.737s 00:15:24.405 11:06:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:24.405 11:06:32 -- common/autotest_common.sh@10 -- # set +x 00:15:24.405 ************************************ 00:15:24.405 END TEST default_locks 00:15:24.405 ************************************ 00:15:24.405 11:06:32 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:15:24.405 11:06:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:24.405 11:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:24.405 11:06:32 -- common/autotest_common.sh@10 -- # set +x 00:15:24.405 ************************************ 00:15:24.405 START TEST default_locks_via_rpc 00:15:24.405 ************************************ 00:15:24.405 11:06:32 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:15:24.405 11:06:32 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62982 00:15:24.405 11:06:32 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:24.405 11:06:32 -- event/cpu_locks.sh@63 -- # waitforlisten 62982 00:15:24.405 11:06:32 -- common/autotest_common.sh@817 -- # '[' -z 62982 ']' 00:15:24.405 11:06:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.405 11:06:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:24.405 11:06:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.405 11:06:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:24.405 11:06:32 -- common/autotest_common.sh@10 -- # set +x 00:15:24.405 [2024-04-18 11:06:32.339993] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:24.405 [2024-04-18 11:06:32.340418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62982 ] 00:15:24.405 [2024-04-18 11:06:32.506784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.663 [2024-04-18 11:06:32.787652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.597 11:06:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:25.597 11:06:33 -- common/autotest_common.sh@850 -- # return 0 00:15:25.597 11:06:33 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:15:25.597 11:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.597 11:06:33 -- common/autotest_common.sh@10 -- # set +x 00:15:25.597 11:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.597 11:06:33 -- event/cpu_locks.sh@67 -- # no_locks 00:15:25.597 11:06:33 -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:25.597 11:06:33 -- event/cpu_locks.sh@26 -- # local lock_files 00:15:25.597 11:06:33 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:25.597 11:06:33 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:15:25.597 11:06:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.597 11:06:33 -- common/autotest_common.sh@10 -- # set +x 00:15:25.597 11:06:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.597 11:06:33 -- event/cpu_locks.sh@71 -- # locks_exist 62982 00:15:25.597 11:06:33 -- event/cpu_locks.sh@22 -- # lslocks -p 62982 00:15:25.597 11:06:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:25.855 11:06:34 -- event/cpu_locks.sh@73 -- # killprocess 62982 00:15:25.855 11:06:34 -- common/autotest_common.sh@936 -- # '[' -z 62982 ']' 00:15:25.855 11:06:34 -- common/autotest_common.sh@940 -- # kill -0 62982 00:15:25.855 11:06:34 -- common/autotest_common.sh@941 -- # uname 00:15:25.855 11:06:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:25.855 11:06:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62982 00:15:26.114 killing process with pid 62982 00:15:26.114 11:06:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:26.114 11:06:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:26.114 11:06:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62982' 00:15:26.114 11:06:34 -- common/autotest_common.sh@955 -- # kill 62982 00:15:26.114 11:06:34 -- common/autotest_common.sh@960 -- # wait 62982 00:15:28.647 ************************************ 00:15:28.647 END TEST default_locks_via_rpc 00:15:28.647 ************************************ 00:15:28.647 00:15:28.647 real 0m4.135s 00:15:28.647 user 0m4.149s 00:15:28.647 sys 0m0.757s 00:15:28.647 11:06:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:28.647 11:06:36 -- common/autotest_common.sh@10 -- # set +x 00:15:28.647 11:06:36 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:15:28.647 11:06:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:28.647 11:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.647 11:06:36 -- common/autotest_common.sh@10 -- # set +x 00:15:28.647 ************************************ 00:15:28.647 START TEST non_locking_app_on_locked_coremask 00:15:28.647 ************************************ 00:15:28.647 11:06:36 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:15:28.647 11:06:36 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63079 00:15:28.647 11:06:36 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:28.647 11:06:36 -- event/cpu_locks.sh@81 -- # waitforlisten 63079 /var/tmp/spdk.sock 00:15:28.647 11:06:36 -- common/autotest_common.sh@817 -- # '[' -z 63079 ']' 00:15:28.647 11:06:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.647 11:06:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:28.647 11:06:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.647 11:06:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:28.647 11:06:36 -- common/autotest_common.sh@10 -- # set +x 00:15:28.647 [2024-04-18 11:06:36.599786] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:28.647 [2024-04-18 11:06:36.599953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63079 ] 00:15:28.647 [2024-04-18 11:06:36.773690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.905 [2024-04-18 11:06:37.038741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:29.840 11:06:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:29.840 11:06:37 -- common/autotest_common.sh@850 -- # return 0 00:15:29.840 11:06:37 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63117 00:15:29.840 11:06:37 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:15:29.840 11:06:37 -- event/cpu_locks.sh@85 -- # waitforlisten 63117 /var/tmp/spdk2.sock 00:15:29.840 11:06:37 -- common/autotest_common.sh@817 -- # '[' -z 63117 ']' 00:15:29.840 11:06:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:29.840 11:06:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:29.840 11:06:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:29.840 11:06:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:29.840 11:06:37 -- common/autotest_common.sh@10 -- # set +x 00:15:29.840 [2024-04-18 11:06:38.013033] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:29.841 [2024-04-18 11:06:38.013506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63117 ] 00:15:30.099 [2024-04-18 11:06:38.199682] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:15:30.099 [2024-04-18 11:06:38.199754] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.665 [2024-04-18 11:06:38.734990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.564 11:06:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:32.564 11:06:40 -- common/autotest_common.sh@850 -- # return 0 00:15:32.564 11:06:40 -- event/cpu_locks.sh@87 -- # locks_exist 63079 00:15:32.564 11:06:40 -- event/cpu_locks.sh@22 -- # lslocks -p 63079 00:15:32.564 11:06:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:33.498 11:06:41 -- event/cpu_locks.sh@89 -- # killprocess 63079 00:15:33.498 11:06:41 -- common/autotest_common.sh@936 -- # '[' -z 63079 ']' 00:15:33.498 11:06:41 -- common/autotest_common.sh@940 -- # kill -0 63079 00:15:33.498 11:06:41 -- common/autotest_common.sh@941 -- # uname 00:15:33.498 11:06:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:33.498 11:06:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63079 00:15:33.498 killing process with pid 63079 00:15:33.498 11:06:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:33.498 11:06:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:33.498 11:06:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63079' 00:15:33.498 11:06:41 -- common/autotest_common.sh@955 -- # kill 63079 00:15:33.498 11:06:41 -- common/autotest_common.sh@960 -- # wait 63079 00:15:38.765 11:06:46 -- event/cpu_locks.sh@90 -- # killprocess 63117 00:15:38.765 11:06:46 -- common/autotest_common.sh@936 -- # '[' -z 63117 ']' 00:15:38.765 11:06:46 -- common/autotest_common.sh@940 -- # kill -0 63117 00:15:38.765 11:06:46 -- common/autotest_common.sh@941 -- # uname 00:15:38.765 11:06:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.765 11:06:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63117 00:15:38.765 11:06:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:38.765 11:06:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:38.765 killing process with pid 63117 00:15:38.765 11:06:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63117' 00:15:38.765 11:06:46 -- common/autotest_common.sh@955 -- # kill 63117 00:15:38.765 11:06:46 -- common/autotest_common.sh@960 -- # wait 63117 00:15:40.663 00:15:40.663 real 0m11.996s 00:15:40.663 user 0m12.189s 00:15:40.663 sys 0m1.554s 00:15:40.663 11:06:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:40.663 ************************************ 00:15:40.663 END TEST non_locking_app_on_locked_coremask 00:15:40.663 ************************************ 00:15:40.663 11:06:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.663 11:06:48 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:15:40.663 11:06:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:40.663 11:06:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.663 11:06:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.663 ************************************ 00:15:40.663 START TEST locking_app_on_unlocked_coremask 00:15:40.663 ************************************ 00:15:40.663 11:06:48 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:15:40.663 11:06:48 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63280 00:15:40.663 11:06:48 -- event/cpu_locks.sh@99 -- # waitforlisten 63280 /var/tmp/spdk.sock 
00:15:40.663 11:06:48 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:15:40.663 11:06:48 -- common/autotest_common.sh@817 -- # '[' -z 63280 ']' 00:15:40.663 11:06:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.663 11:06:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:40.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.663 11:06:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.663 11:06:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:40.663 11:06:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.663 [2024-04-18 11:06:48.730736] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:40.663 [2024-04-18 11:06:48.730923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63280 ] 00:15:40.921 [2024-04-18 11:06:48.909815] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:15:40.921 [2024-04-18 11:06:48.909880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.178 [2024-04-18 11:06:49.194864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.115 11:06:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:42.115 11:06:50 -- common/autotest_common.sh@850 -- # return 0 00:15:42.115 11:06:50 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63314 00:15:42.115 11:06:50 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:42.115 11:06:50 -- event/cpu_locks.sh@103 -- # waitforlisten 63314 /var/tmp/spdk2.sock 00:15:42.115 11:06:50 -- common/autotest_common.sh@817 -- # '[' -z 63314 ']' 00:15:42.115 11:06:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:42.115 11:06:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:42.115 11:06:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:42.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:42.115 11:06:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:42.115 11:06:50 -- common/autotest_common.sh@10 -- # set +x 00:15:42.115 [2024-04-18 11:06:50.157547] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:42.115 [2024-04-18 11:06:50.158040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63314 ] 00:15:42.391 [2024-04-18 11:06:50.340281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.648 [2024-04-18 11:06:50.859969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.547 11:06:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:44.547 11:06:52 -- common/autotest_common.sh@850 -- # return 0 00:15:44.547 11:06:52 -- event/cpu_locks.sh@105 -- # locks_exist 63314 00:15:44.547 11:06:52 -- event/cpu_locks.sh@22 -- # lslocks -p 63314 00:15:44.547 11:06:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:45.113 11:06:53 -- event/cpu_locks.sh@107 -- # killprocess 63280 00:15:45.113 11:06:53 -- common/autotest_common.sh@936 -- # '[' -z 63280 ']' 00:15:45.113 11:06:53 -- common/autotest_common.sh@940 -- # kill -0 63280 00:15:45.113 11:06:53 -- common/autotest_common.sh@941 -- # uname 00:15:45.113 11:06:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:45.113 11:06:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63280 00:15:45.371 killing process with pid 63280 00:15:45.371 11:06:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:45.371 11:06:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:45.371 11:06:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63280' 00:15:45.371 11:06:53 -- common/autotest_common.sh@955 -- # kill 63280 00:15:45.371 11:06:53 -- common/autotest_common.sh@960 -- # wait 63280 00:15:50.641 11:06:57 -- event/cpu_locks.sh@108 -- # killprocess 63314 00:15:50.641 11:06:57 -- common/autotest_common.sh@936 -- # '[' -z 63314 ']' 00:15:50.641 11:06:57 -- common/autotest_common.sh@940 -- # kill -0 63314 00:15:50.641 11:06:57 -- common/autotest_common.sh@941 -- # uname 00:15:50.641 11:06:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:50.641 11:06:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63314 00:15:50.641 killing process with pid 63314 00:15:50.641 11:06:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:50.641 11:06:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:50.641 11:06:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63314' 00:15:50.641 11:06:57 -- common/autotest_common.sh@955 -- # kill 63314 00:15:50.641 11:06:57 -- common/autotest_common.sh@960 -- # wait 63314 00:15:52.541 ************************************ 00:15:52.541 END TEST locking_app_on_unlocked_coremask 00:15:52.541 ************************************ 00:15:52.541 00:15:52.541 real 0m11.704s 00:15:52.541 user 0m11.881s 00:15:52.541 sys 0m1.467s 00:15:52.541 11:07:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:52.541 11:07:00 -- common/autotest_common.sh@10 -- # set +x 00:15:52.541 11:07:00 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:15:52.541 11:07:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:52.541 11:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.541 11:07:00 -- common/autotest_common.sh@10 -- # set +x 00:15:52.541 ************************************ 00:15:52.541 START TEST locking_app_on_locked_coremask 00:15:52.541 
************************************ 00:15:52.541 11:07:00 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:15:52.541 11:07:00 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63477 00:15:52.541 11:07:00 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:52.541 11:07:00 -- event/cpu_locks.sh@116 -- # waitforlisten 63477 /var/tmp/spdk.sock 00:15:52.541 11:07:00 -- common/autotest_common.sh@817 -- # '[' -z 63477 ']' 00:15:52.541 11:07:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.541 11:07:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:52.541 11:07:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.541 11:07:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:52.541 11:07:00 -- common/autotest_common.sh@10 -- # set +x 00:15:52.541 [2024-04-18 11:07:00.543999] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:52.541 [2024-04-18 11:07:00.544471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63477 ] 00:15:52.541 [2024-04-18 11:07:00.720460] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.799 [2024-04-18 11:07:00.993751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.735 11:07:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:53.735 11:07:01 -- common/autotest_common.sh@850 -- # return 0 00:15:53.735 11:07:01 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63505 00:15:53.735 11:07:01 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63505 /var/tmp/spdk2.sock 00:15:53.735 11:07:01 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:53.735 11:07:01 -- common/autotest_common.sh@638 -- # local es=0 00:15:53.735 11:07:01 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63505 /var/tmp/spdk2.sock 00:15:53.735 11:07:01 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:15:53.735 11:07:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:53.735 11:07:01 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:15:53.735 11:07:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:53.735 11:07:01 -- common/autotest_common.sh@641 -- # waitforlisten 63505 /var/tmp/spdk2.sock 00:15:53.735 11:07:01 -- common/autotest_common.sh@817 -- # '[' -z 63505 ']' 00:15:53.735 11:07:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:53.735 11:07:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:53.735 11:07:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:53.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:53.735 11:07:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:53.735 11:07:01 -- common/autotest_common.sh@10 -- # set +x 00:15:53.735 [2024-04-18 11:07:01.905503] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:53.735 [2024-04-18 11:07:01.905933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63505 ] 00:15:53.993 [2024-04-18 11:07:02.085398] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63477 has claimed it. 00:15:53.993 [2024-04-18 11:07:02.085499] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:54.559 ERROR: process (pid: 63505) is no longer running 00:15:54.559 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63505) - No such process 00:15:54.559 11:07:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:54.559 11:07:02 -- common/autotest_common.sh@850 -- # return 1 00:15:54.559 11:07:02 -- common/autotest_common.sh@641 -- # es=1 00:15:54.559 11:07:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:54.559 11:07:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:54.559 11:07:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:54.559 11:07:02 -- event/cpu_locks.sh@122 -- # locks_exist 63477 00:15:54.559 11:07:02 -- event/cpu_locks.sh@22 -- # lslocks -p 63477 00:15:54.559 11:07:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:54.817 11:07:02 -- event/cpu_locks.sh@124 -- # killprocess 63477 00:15:54.817 11:07:02 -- common/autotest_common.sh@936 -- # '[' -z 63477 ']' 00:15:54.817 11:07:02 -- common/autotest_common.sh@940 -- # kill -0 63477 00:15:54.817 11:07:02 -- common/autotest_common.sh@941 -- # uname 00:15:54.817 11:07:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:54.817 11:07:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63477 00:15:54.817 killing process with pid 63477 00:15:54.817 11:07:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:54.817 11:07:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:54.817 11:07:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63477' 00:15:54.817 11:07:02 -- common/autotest_common.sh@955 -- # kill 63477 00:15:54.817 11:07:02 -- common/autotest_common.sh@960 -- # wait 63477 00:15:57.348 00:15:57.348 real 0m4.820s 00:15:57.348 user 0m5.117s 00:15:57.348 sys 0m0.919s 00:15:57.348 11:07:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:57.348 11:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:57.348 ************************************ 00:15:57.348 END TEST locking_app_on_locked_coremask 00:15:57.348 ************************************ 00:15:57.348 11:07:05 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:15:57.348 11:07:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:57.348 11:07:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.348 11:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:57.348 ************************************ 00:15:57.348 START TEST locking_overlapped_coremask 00:15:57.348 ************************************ 00:15:57.348 11:07:05 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:15:57.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:57.348 11:07:05 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63590 00:15:57.348 11:07:05 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:57.348 11:07:05 -- event/cpu_locks.sh@133 -- # waitforlisten 63590 /var/tmp/spdk.sock 00:15:57.348 11:07:05 -- common/autotest_common.sh@817 -- # '[' -z 63590 ']' 00:15:57.348 11:07:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.348 11:07:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:57.348 11:07:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.348 11:07:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:57.348 11:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:57.348 [2024-04-18 11:07:05.485032] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:57.348 [2024-04-18 11:07:05.485498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63590 ] 00:15:57.606 [2024-04-18 11:07:05.658509] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:57.864 [2024-04-18 11:07:05.945407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.864 [2024-04-18 11:07:05.945538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.864 [2024-04-18 11:07:05.945583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.799 11:07:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:58.799 11:07:06 -- common/autotest_common.sh@850 -- # return 0 00:15:58.799 11:07:06 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63620 00:15:58.799 11:07:06 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63620 /var/tmp/spdk2.sock 00:15:58.799 11:07:06 -- common/autotest_common.sh@638 -- # local es=0 00:15:58.799 11:07:06 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63620 /var/tmp/spdk2.sock 00:15:58.799 11:07:06 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:15:58.799 11:07:06 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:15:58.799 11:07:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:58.799 11:07:06 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:15:58.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:58.799 11:07:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:58.799 11:07:06 -- common/autotest_common.sh@641 -- # waitforlisten 63620 /var/tmp/spdk2.sock 00:15:58.799 11:07:06 -- common/autotest_common.sh@817 -- # '[' -z 63620 ']' 00:15:58.799 11:07:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:58.799 11:07:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:58.799 11:07:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:58.799 11:07:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:58.799 11:07:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.799 [2024-04-18 11:07:06.996712] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:58.799 [2024-04-18 11:07:06.996877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63620 ] 00:15:59.057 [2024-04-18 11:07:07.185516] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63590 has claimed it. 00:15:59.057 [2024-04-18 11:07:07.185781] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:59.625 ERROR: process (pid: 63620) is no longer running 00:15:59.625 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63620) - No such process 00:15:59.625 11:07:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:59.625 11:07:07 -- common/autotest_common.sh@850 -- # return 1 00:15:59.625 11:07:07 -- common/autotest_common.sh@641 -- # es=1 00:15:59.625 11:07:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:59.625 11:07:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:59.625 11:07:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:59.625 11:07:07 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:15:59.625 11:07:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:59.625 11:07:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:59.625 11:07:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:59.625 11:07:07 -- event/cpu_locks.sh@141 -- # killprocess 63590 00:15:59.625 11:07:07 -- common/autotest_common.sh@936 -- # '[' -z 63590 ']' 00:15:59.625 11:07:07 -- common/autotest_common.sh@940 -- # kill -0 63590 00:15:59.625 11:07:07 -- common/autotest_common.sh@941 -- # uname 00:15:59.625 11:07:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:59.625 11:07:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63590 00:15:59.625 killing process with pid 63590 00:15:59.625 11:07:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:59.625 11:07:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:59.625 11:07:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63590' 00:15:59.625 11:07:07 -- common/autotest_common.sh@955 -- # kill 63590 00:15:59.625 11:07:07 -- common/autotest_common.sh@960 -- # wait 63590 00:16:02.158 00:16:02.158 real 0m4.698s 00:16:02.158 user 0m12.063s 00:16:02.158 sys 0m0.793s 00:16:02.158 11:07:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:02.158 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:16:02.158 ************************************ 00:16:02.158 END TEST locking_overlapped_coremask 00:16:02.158 ************************************ 00:16:02.158 11:07:10 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:16:02.158 11:07:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:02.158 11:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:02.158 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:16:02.158 ************************************ 00:16:02.158 START TEST locking_overlapped_coremask_via_rpc 00:16:02.158 
************************************ 00:16:02.158 11:07:10 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:16:02.158 11:07:10 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63695 00:16:02.158 11:07:10 -- event/cpu_locks.sh@149 -- # waitforlisten 63695 /var/tmp/spdk.sock 00:16:02.158 11:07:10 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:16:02.158 11:07:10 -- common/autotest_common.sh@817 -- # '[' -z 63695 ']' 00:16:02.158 11:07:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.158 11:07:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:02.158 11:07:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.158 11:07:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:02.158 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:16:02.158 [2024-04-18 11:07:10.310177] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:02.158 [2024-04-18 11:07:10.310357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63695 ] 00:16:02.417 [2024-04-18 11:07:10.488026] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:16:02.417 [2024-04-18 11:07:10.488092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:02.674 [2024-04-18 11:07:10.799929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.674 [2024-04-18 11:07:10.800033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.674 [2024-04-18 11:07:10.800055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:03.608 11:07:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:03.608 11:07:11 -- common/autotest_common.sh@850 -- # return 0 00:16:03.608 11:07:11 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63736 00:16:03.608 11:07:11 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:16:03.608 11:07:11 -- event/cpu_locks.sh@153 -- # waitforlisten 63736 /var/tmp/spdk2.sock 00:16:03.608 11:07:11 -- common/autotest_common.sh@817 -- # '[' -z 63736 ']' 00:16:03.608 11:07:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:03.608 11:07:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:03.608 11:07:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:03.608 11:07:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:03.608 11:07:11 -- common/autotest_common.sh@10 -- # set +x 00:16:03.608 [2024-04-18 11:07:11.728702] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:03.608 [2024-04-18 11:07:11.729039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63736 ] 00:16:03.869 [2024-04-18 11:07:11.902140] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:16:03.869 [2024-04-18 11:07:11.902200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.435 [2024-04-18 11:07:12.394533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.435 [2024-04-18 11:07:12.397246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.435 [2024-04-18 11:07:12.397253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:05.810 11:07:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:05.810 11:07:13 -- common/autotest_common.sh@850 -- # return 0 00:16:05.810 11:07:13 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:16:05.810 11:07:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.810 11:07:13 -- common/autotest_common.sh@10 -- # set +x 00:16:05.810 11:07:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.810 11:07:13 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:05.810 11:07:13 -- common/autotest_common.sh@638 -- # local es=0 00:16:05.810 11:07:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:05.810 11:07:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:05.810 11:07:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:05.810 11:07:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:05.810 11:07:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:05.810 11:07:13 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:05.810 11:07:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.810 11:07:13 -- common/autotest_common.sh@10 -- # set +x 00:16:05.810 [2024-04-18 11:07:13.975358] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63695 has claimed it. 00:16:05.811 2024/04/18 11:07:13 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:16:05.811 request: 00:16:05.811 { 00:16:05.811 "method": "framework_enable_cpumask_locks", 00:16:05.811 "params": {} 00:16:05.811 } 00:16:05.811 Got JSON-RPC error response 00:16:05.811 GoRPCClient: error on JSON-RPC call 00:16:05.811 11:07:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:05.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:05.811 11:07:13 -- common/autotest_common.sh@641 -- # es=1 00:16:05.811 11:07:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:05.811 11:07:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:05.811 11:07:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:05.811 11:07:13 -- event/cpu_locks.sh@158 -- # waitforlisten 63695 /var/tmp/spdk.sock 00:16:05.811 11:07:13 -- common/autotest_common.sh@817 -- # '[' -z 63695 ']' 00:16:05.811 11:07:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.811 11:07:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:05.811 11:07:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.811 11:07:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:05.811 11:07:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.069 11:07:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.069 11:07:14 -- common/autotest_common.sh@850 -- # return 0 00:16:06.069 11:07:14 -- event/cpu_locks.sh@159 -- # waitforlisten 63736 /var/tmp/spdk2.sock 00:16:06.069 11:07:14 -- common/autotest_common.sh@817 -- # '[' -z 63736 ']' 00:16:06.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:06.069 11:07:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:06.069 11:07:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:06.069 11:07:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:06.069 11:07:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:06.069 11:07:14 -- common/autotest_common.sh@10 -- # set +x 00:16:06.635 11:07:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.635 11:07:14 -- common/autotest_common.sh@850 -- # return 0 00:16:06.635 11:07:14 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:16:06.635 11:07:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:06.635 11:07:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:06.635 11:07:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:06.635 00:16:06.635 real 0m4.416s 00:16:06.635 user 0m1.493s 00:16:06.635 sys 0m0.245s 00:16:06.635 11:07:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:06.635 11:07:14 -- common/autotest_common.sh@10 -- # set +x 00:16:06.635 ************************************ 00:16:06.635 END TEST locking_overlapped_coremask_via_rpc 00:16:06.635 ************************************ 00:16:06.635 11:07:14 -- event/cpu_locks.sh@174 -- # cleanup 00:16:06.635 11:07:14 -- event/cpu_locks.sh@15 -- # [[ -z 63695 ]] 00:16:06.635 11:07:14 -- event/cpu_locks.sh@15 -- # killprocess 63695 00:16:06.635 11:07:14 -- common/autotest_common.sh@936 -- # '[' -z 63695 ']' 00:16:06.635 11:07:14 -- common/autotest_common.sh@940 -- # kill -0 63695 00:16:06.635 11:07:14 -- common/autotest_common.sh@941 -- # uname 00:16:06.635 11:07:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:06.635 11:07:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63695 00:16:06.635 killing process with pid 63695 
00:16:06.635 11:07:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:06.635 11:07:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:06.635 11:07:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63695' 00:16:06.635 11:07:14 -- common/autotest_common.sh@955 -- # kill 63695 00:16:06.635 11:07:14 -- common/autotest_common.sh@960 -- # wait 63695 00:16:09.165 11:07:17 -- event/cpu_locks.sh@16 -- # [[ -z 63736 ]] 00:16:09.165 11:07:17 -- event/cpu_locks.sh@16 -- # killprocess 63736 00:16:09.165 11:07:17 -- common/autotest_common.sh@936 -- # '[' -z 63736 ']' 00:16:09.165 11:07:17 -- common/autotest_common.sh@940 -- # kill -0 63736 00:16:09.165 11:07:17 -- common/autotest_common.sh@941 -- # uname 00:16:09.165 11:07:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:09.165 11:07:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63736 00:16:09.165 killing process with pid 63736 00:16:09.165 11:07:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:09.165 11:07:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:09.165 11:07:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63736' 00:16:09.165 11:07:17 -- common/autotest_common.sh@955 -- # kill 63736 00:16:09.165 11:07:17 -- common/autotest_common.sh@960 -- # wait 63736 00:16:11.695 11:07:19 -- event/cpu_locks.sh@18 -- # rm -f 00:16:11.695 11:07:19 -- event/cpu_locks.sh@1 -- # cleanup 00:16:11.695 11:07:19 -- event/cpu_locks.sh@15 -- # [[ -z 63695 ]] 00:16:11.695 11:07:19 -- event/cpu_locks.sh@15 -- # killprocess 63695 00:16:11.695 11:07:19 -- common/autotest_common.sh@936 -- # '[' -z 63695 ']' 00:16:11.695 11:07:19 -- common/autotest_common.sh@940 -- # kill -0 63695 00:16:11.695 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63695) - No such process 00:16:11.695 Process with pid 63695 is not found 00:16:11.695 11:07:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63695 is not found' 00:16:11.695 11:07:19 -- event/cpu_locks.sh@16 -- # [[ -z 63736 ]] 00:16:11.695 11:07:19 -- event/cpu_locks.sh@16 -- # killprocess 63736 00:16:11.695 11:07:19 -- common/autotest_common.sh@936 -- # '[' -z 63736 ']' 00:16:11.695 11:07:19 -- common/autotest_common.sh@940 -- # kill -0 63736 00:16:11.695 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63736) - No such process 00:16:11.695 Process with pid 63736 is not found 00:16:11.695 11:07:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63736 is not found' 00:16:11.695 11:07:19 -- event/cpu_locks.sh@18 -- # rm -f 00:16:11.695 00:16:11.695 real 0m51.558s 00:16:11.695 user 1m24.812s 00:16:11.695 sys 0m7.862s 00:16:11.695 11:07:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:11.695 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:16:11.695 ************************************ 00:16:11.695 END TEST cpu_locks 00:16:11.695 ************************************ 00:16:11.695 00:16:11.695 real 1m24.727s 00:16:11.695 user 2m29.224s 00:16:11.695 sys 0m12.425s 00:16:11.695 11:07:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:11.695 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:16:11.695 ************************************ 00:16:11.695 END TEST event 00:16:11.695 ************************************ 00:16:11.695 11:07:19 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:11.695 11:07:19 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:11.695 11:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.695 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:16:11.695 ************************************ 00:16:11.695 START TEST thread 00:16:11.695 ************************************ 00:16:11.695 11:07:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:11.695 * Looking for test storage... 00:16:11.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:16:11.695 11:07:19 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:11.695 11:07:19 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:16:11.695 11:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.695 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:16:11.695 ************************************ 00:16:11.695 START TEST thread_poller_perf 00:16:11.695 ************************************ 00:16:11.695 11:07:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:11.695 [2024-04-18 11:07:19.827767] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:11.695 [2024-04-18 11:07:19.827931] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63949 ] 00:16:11.953 [2024-04-18 11:07:19.999854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.211 [2024-04-18 11:07:20.316952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.211 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:16:13.585 ====================================== 00:16:13.585 busy:2215312122 (cyc) 00:16:13.585 total_run_count: 301000 00:16:13.585 tsc_hz: 2200000000 (cyc) 00:16:13.585 ====================================== 00:16:13.585 poller_cost: 7359 (cyc), 3345 (nsec) 00:16:13.585 00:16:13.585 real 0m1.947s 00:16:13.585 user 0m1.706s 00:16:13.585 sys 0m0.130s 00:16:13.585 11:07:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:13.585 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:16:13.585 ************************************ 00:16:13.585 END TEST thread_poller_perf 00:16:13.585 ************************************ 00:16:13.585 11:07:21 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:13.585 11:07:21 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:16:13.585 11:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.585 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:16:13.843 ************************************ 00:16:13.843 START TEST thread_poller_perf 00:16:13.843 ************************************ 00:16:13.843 11:07:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:13.843 [2024-04-18 11:07:21.897518] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
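The poller_perf summary above is consistent with simple arithmetic: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure follows from tsc_hz (2.2 GHz on this host). A quick shell check with the values from this run:

  echo $(( 2215312122 / 301000 ))              # 7359 cycles per poller invocation
  echo $(( 7359 * 1000000000 / 2200000000 ))   # 3345 nsec at a 2200000000 Hz TSC

The same relation holds for the zero-period run that follows, where the per-poll cost drops to 579 cycles (263 nsec).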
00:16:13.843 [2024-04-18 11:07:21.897715] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63995 ] 00:16:14.102 [2024-04-18 11:07:22.077542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.361 [2024-04-18 11:07:22.394686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.361 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:16:15.737 ====================================== 00:16:15.737 busy:2204408652 (cyc) 00:16:15.737 total_run_count: 3803000 00:16:15.737 tsc_hz: 2200000000 (cyc) 00:16:15.737 ====================================== 00:16:15.737 poller_cost: 579 (cyc), 263 (nsec) 00:16:15.737 00:16:15.737 real 0m1.943s 00:16:15.737 user 0m1.682s 00:16:15.737 sys 0m0.149s 00:16:15.737 11:07:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:15.737 11:07:23 -- common/autotest_common.sh@10 -- # set +x 00:16:15.737 ************************************ 00:16:15.737 END TEST thread_poller_perf 00:16:15.737 ************************************ 00:16:15.737 11:07:23 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:16:15.737 00:16:15.737 real 0m4.224s 00:16:15.737 user 0m3.511s 00:16:15.737 sys 0m0.456s 00:16:15.737 11:07:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:15.737 11:07:23 -- common/autotest_common.sh@10 -- # set +x 00:16:15.737 ************************************ 00:16:15.737 END TEST thread 00:16:15.737 ************************************ 00:16:15.737 11:07:23 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:15.737 11:07:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:15.737 11:07:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:15.737 11:07:23 -- common/autotest_common.sh@10 -- # set +x 00:16:15.995 ************************************ 00:16:15.995 START TEST accel 00:16:15.995 ************************************ 00:16:15.995 11:07:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:15.995 * Looking for test storage... 00:16:15.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:15.995 11:07:24 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:16:15.995 11:07:24 -- accel/accel.sh@82 -- # get_expected_opcs 00:16:15.995 11:07:24 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:15.995 11:07:24 -- accel/accel.sh@62 -- # spdk_tgt_pid=64081 00:16:15.995 11:07:24 -- accel/accel.sh@63 -- # waitforlisten 64081 00:16:15.995 11:07:24 -- common/autotest_common.sh@817 -- # '[' -z 64081 ']' 00:16:15.995 11:07:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.995 11:07:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:15.995 11:07:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.995 11:07:24 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:16:15.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:15.995 11:07:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:15.995 11:07:24 -- accel/accel.sh@61 -- # build_accel_config 00:16:15.995 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.995 11:07:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:15.995 11:07:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:15.995 11:07:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:15.995 11:07:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:15.995 11:07:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:15.995 11:07:24 -- accel/accel.sh@40 -- # local IFS=, 00:16:15.995 11:07:24 -- accel/accel.sh@41 -- # jq -r . 00:16:15.995 [2024-04-18 11:07:24.168500] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:15.995 [2024-04-18 11:07:24.168654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64081 ] 00:16:16.253 [2024-04-18 11:07:24.338598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.511 [2024-04-18 11:07:24.610966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.444 11:07:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:17.444 11:07:25 -- common/autotest_common.sh@850 -- # return 0 00:16:17.444 11:07:25 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:16:17.444 11:07:25 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:16:17.444 11:07:25 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:16:17.444 11:07:25 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:16:17.444 11:07:25 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:16:17.444 11:07:25 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:16:17.444 11:07:25 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:16:17.444 11:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.444 11:07:25 -- common/autotest_common.sh@10 -- # set +x 00:16:17.444 11:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 
11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # IFS== 00:16:17.444 11:07:25 -- accel/accel.sh@72 -- # read -r opc module 00:16:17.444 11:07:25 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:17.444 11:07:25 -- accel/accel.sh@75 -- # killprocess 64081 00:16:17.444 11:07:25 -- common/autotest_common.sh@936 -- # '[' -z 64081 ']' 00:16:17.444 11:07:25 -- common/autotest_common.sh@940 -- # kill -0 64081 00:16:17.444 11:07:25 -- common/autotest_common.sh@941 -- # uname 00:16:17.444 11:07:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:17.444 11:07:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64081 00:16:17.444 11:07:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:17.444 11:07:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:17.444 killing process with pid 64081 00:16:17.444 11:07:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64081' 00:16:17.444 11:07:25 -- common/autotest_common.sh@955 -- # kill 64081 00:16:17.444 11:07:25 -- common/autotest_common.sh@960 -- # wait 64081 00:16:19.974 11:07:28 -- accel/accel.sh@76 -- # trap - ERR 00:16:19.974 11:07:28 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:16:19.974 11:07:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:19.974 11:07:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.974 11:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:19.974 11:07:28 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:16:19.974 11:07:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:16:19.974 11:07:28 -- accel/accel.sh@12 -- # build_accel_config 00:16:19.974 11:07:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:19.974 11:07:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:19.975 11:07:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:19.975 11:07:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:19.975 11:07:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:19.975 11:07:28 -- accel/accel.sh@40 -- # local IFS=, 00:16:19.975 11:07:28 -- accel/accel.sh@41 -- # jq -r . 
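The long run of opc=software assignments above is the accel_get_opc_assignments RPC fed through the jq filter shown in the trace; every opcode resolves to the software module because no hardware accel drivers are configured for this job. Reproduced standalone it would look roughly like this (rpc.py path and the exact opcode names are assumptions; the jq filter is taken verbatim from the log):

  scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # fill=software
  # ... one <opcode>=software line per supported operation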
00:16:20.233 11:07:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:20.233 11:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:20.233 11:07:28 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:16:20.233 11:07:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:20.233 11:07:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.233 11:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:20.233 ************************************ 00:16:20.233 START TEST accel_missing_filename 00:16:20.233 ************************************ 00:16:20.233 11:07:28 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:16:20.233 11:07:28 -- common/autotest_common.sh@638 -- # local es=0 00:16:20.233 11:07:28 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:16:20.233 11:07:28 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:20.233 11:07:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:20.233 11:07:28 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:20.233 11:07:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:20.233 11:07:28 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:16:20.233 11:07:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:16:20.233 11:07:28 -- accel/accel.sh@12 -- # build_accel_config 00:16:20.233 11:07:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:20.233 11:07:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:20.233 11:07:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:20.233 11:07:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:20.233 11:07:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:20.233 11:07:28 -- accel/accel.sh@40 -- # local IFS=, 00:16:20.233 11:07:28 -- accel/accel.sh@41 -- # jq -r . 00:16:20.233 [2024-04-18 11:07:28.418514] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:20.233 [2024-04-18 11:07:28.418669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64193 ] 00:16:20.491 [2024-04-18 11:07:28.587876] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.749 [2024-04-18 11:07:28.850378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.007 [2024-04-18 11:07:29.074708] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:21.626 [2024-04-18 11:07:29.589938] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:16:21.884 A filename is required. 
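That error is the whole point of accel_missing_filename: the compress workload needs an input file supplied with -l, so accel_perf must refuse to start without one (the harness records the non-zero exit just below as es=234). Stripped of the test wrappers, and leaving out the -c /dev/fd/62 JSON config the harness passes, the negative case is simply:

  # expected to fail: compress workload with no -l <input file>
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
  # => 'A filename is required.'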
00:16:21.884 11:07:30 -- common/autotest_common.sh@641 -- # es=234 00:16:21.884 11:07:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:21.884 11:07:30 -- common/autotest_common.sh@650 -- # es=106 00:16:21.884 11:07:30 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:21.884 11:07:30 -- common/autotest_common.sh@658 -- # es=1 00:16:21.884 11:07:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:21.884 00:16:21.884 real 0m1.647s 00:16:21.884 user 0m1.359s 00:16:21.884 sys 0m0.225s 00:16:21.884 11:07:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:21.884 11:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:21.884 ************************************ 00:16:21.884 END TEST accel_missing_filename 00:16:21.884 ************************************ 00:16:21.884 11:07:30 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:21.884 11:07:30 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:16:21.884 11:07:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.884 11:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:22.142 ************************************ 00:16:22.142 START TEST accel_compress_verify 00:16:22.142 ************************************ 00:16:22.142 11:07:30 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:22.142 11:07:30 -- common/autotest_common.sh@638 -- # local es=0 00:16:22.142 11:07:30 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:22.142 11:07:30 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:22.142 11:07:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.142 11:07:30 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:22.142 11:07:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.142 11:07:30 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:22.142 11:07:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:22.142 11:07:30 -- accel/accel.sh@12 -- # build_accel_config 00:16:22.142 11:07:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:22.142 11:07:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:22.142 11:07:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:22.142 11:07:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:22.142 11:07:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:22.142 11:07:30 -- accel/accel.sh@40 -- # local IFS=, 00:16:22.142 11:07:30 -- accel/accel.sh@41 -- # jq -r . 00:16:22.142 [2024-04-18 11:07:30.208545] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
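accel_compress_verify is the mirror-image negative case: a valid input file (test/accel/bib) is supplied this time, but -y asks for result verification, which the compress workload does not support, so the run traced below is again expected to abort. Reduced to the bare invocation from the log:

  # expected to fail: compress does not support the -y verify option
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y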
00:16:22.142 [2024-04-18 11:07:30.208793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64235 ] 00:16:22.401 [2024-04-18 11:07:30.392605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.659 [2024-04-18 11:07:30.758770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.917 [2024-04-18 11:07:30.995290] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:23.484 [2024-04-18 11:07:31.534210] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:16:23.742 00:16:23.742 Compression does not support the verify option, aborting. 00:16:23.742 11:07:31 -- common/autotest_common.sh@641 -- # es=161 00:16:23.742 11:07:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:23.742 11:07:31 -- common/autotest_common.sh@650 -- # es=33 00:16:23.742 11:07:31 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:23.742 11:07:31 -- common/autotest_common.sh@658 -- # es=1 00:16:23.742 11:07:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:23.742 00:16:23.742 real 0m1.805s 00:16:23.742 user 0m1.515s 00:16:23.742 sys 0m0.216s 00:16:23.742 11:07:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:23.742 11:07:31 -- common/autotest_common.sh@10 -- # set +x 00:16:23.742 ************************************ 00:16:23.742 END TEST accel_compress_verify 00:16:23.742 ************************************ 00:16:24.000 11:07:31 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:16:24.000 11:07:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:24.000 11:07:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.000 11:07:31 -- common/autotest_common.sh@10 -- # set +x 00:16:24.000 ************************************ 00:16:24.000 START TEST accel_wrong_workload 00:16:24.000 ************************************ 00:16:24.000 11:07:32 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:16:24.000 11:07:32 -- common/autotest_common.sh@638 -- # local es=0 00:16:24.000 11:07:32 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:16:24.000 11:07:32 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:24.000 11:07:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:24.000 11:07:32 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:24.000 11:07:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:24.000 11:07:32 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:16:24.000 11:07:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:16:24.000 11:07:32 -- accel/accel.sh@12 -- # build_accel_config 00:16:24.000 11:07:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:24.000 11:07:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:24.000 11:07:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:24.000 11:07:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:24.000 11:07:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:24.000 11:07:32 -- accel/accel.sh@40 -- # local IFS=, 00:16:24.000 11:07:32 -- accel/accel.sh@41 -- # jq -r . 
00:16:24.000 Unsupported workload type: foobar 00:16:24.000 [2024-04-18 11:07:32.101927] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:16:24.000 accel_perf options: 00:16:24.000 [-h help message] 00:16:24.000 [-q queue depth per core] 00:16:24.000 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:24.000 [-T number of threads per core 00:16:24.000 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:24.000 [-t time in seconds] 00:16:24.000 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:24.000 [ dif_verify, , dif_generate, dif_generate_copy 00:16:24.000 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:24.000 [-l for compress/decompress workloads, name of uncompressed input file 00:16:24.000 [-S for crc32c workload, use this seed value (default 0) 00:16:24.000 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:24.000 [-f for fill workload, use this BYTE value (default 255) 00:16:24.000 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:24.000 [-y verify result if this switch is on] 00:16:24.000 [-a tasks to allocate per core (default: same value as -q)] 00:16:24.000 Can be used to spread operations across a wider range of memory. 00:16:24.000 11:07:32 -- common/autotest_common.sh@641 -- # es=1 00:16:24.000 11:07:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:24.000 11:07:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:24.000 11:07:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:24.000 00:16:24.000 real 0m0.071s 00:16:24.000 user 0m0.084s 00:16:24.000 sys 0m0.041s 00:16:24.000 11:07:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:24.000 ************************************ 00:16:24.000 END TEST accel_wrong_workload 00:16:24.000 ************************************ 00:16:24.000 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.000 11:07:32 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:16:24.000 11:07:32 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:16:24.000 11:07:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.000 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.259 ************************************ 00:16:24.259 START TEST accel_negative_buffers 00:16:24.259 ************************************ 00:16:24.259 11:07:32 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:16:24.259 11:07:32 -- common/autotest_common.sh@638 -- # local es=0 00:16:24.259 11:07:32 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:16:24.259 11:07:32 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:24.259 11:07:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:24.259 11:07:32 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:24.259 11:07:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:24.259 11:07:32 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:16:24.259 11:07:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:16:24.259 11:07:32 -- accel/accel.sh@12 -- # 
build_accel_config 00:16:24.259 11:07:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:24.259 11:07:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:24.259 11:07:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:24.259 11:07:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:24.259 11:07:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:24.259 11:07:32 -- accel/accel.sh@40 -- # local IFS=, 00:16:24.259 11:07:32 -- accel/accel.sh@41 -- # jq -r . 00:16:24.259 -x option must be non-negative. 00:16:24.259 [2024-04-18 11:07:32.306846] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:16:24.259 accel_perf options: 00:16:24.259 [-h help message] 00:16:24.259 [-q queue depth per core] 00:16:24.259 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:24.259 [-T number of threads per core 00:16:24.259 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:24.259 [-t time in seconds] 00:16:24.259 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:24.259 [ dif_verify, , dif_generate, dif_generate_copy 00:16:24.259 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:24.259 [-l for compress/decompress workloads, name of uncompressed input file 00:16:24.259 [-S for crc32c workload, use this seed value (default 0) 00:16:24.259 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:24.259 [-f for fill workload, use this BYTE value (default 255) 00:16:24.259 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:24.259 [-y verify result if this switch is on] 00:16:24.259 [-a tasks to allocate per core (default: same value as -q)] 00:16:24.259 Can be used to spread operations across a wider range of memory. 
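Both of these negative tests exercise accel_perf's argument parsing rather than any data path: an unknown -w workload name and a negative -x source-buffer count each make spdk_app_parse_args reject the option and print the usage text above. Outside the harness the two failing invocations reduce to:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar        # 'Unsupported workload type: foobar'
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1  # '-x option must be non-negative.'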
00:16:24.259 ************************************ 00:16:24.259 END TEST accel_negative_buffers 00:16:24.259 ************************************ 00:16:24.259 11:07:32 -- common/autotest_common.sh@641 -- # es=1 00:16:24.259 11:07:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:24.259 11:07:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:24.259 11:07:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:24.259 00:16:24.259 real 0m0.090s 00:16:24.259 user 0m0.093s 00:16:24.259 sys 0m0.047s 00:16:24.259 11:07:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:24.259 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.259 11:07:32 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:16:24.259 11:07:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:24.259 11:07:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.259 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.259 ************************************ 00:16:24.259 START TEST accel_crc32c 00:16:24.259 ************************************ 00:16:24.259 11:07:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:16:24.259 11:07:32 -- accel/accel.sh@16 -- # local accel_opc 00:16:24.259 11:07:32 -- accel/accel.sh@17 -- # local accel_module 00:16:24.259 11:07:32 -- accel/accel.sh@19 -- # IFS=: 00:16:24.259 11:07:32 -- accel/accel.sh@19 -- # read -r var val 00:16:24.259 11:07:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:16:24.259 11:07:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:16:24.259 11:07:32 -- accel/accel.sh@12 -- # build_accel_config 00:16:24.259 11:07:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:24.259 11:07:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:24.259 11:07:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:24.259 11:07:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:24.259 11:07:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:24.259 11:07:32 -- accel/accel.sh@40 -- # local IFS=, 00:16:24.259 11:07:32 -- accel/accel.sh@41 -- # jq -r . 00:16:24.518 [2024-04-18 11:07:32.512797] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
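From here the suite switches to functional accel cases. accel_crc32c drives accel_perf with -w crc32c -S 32 -y, i.e. a CRC-32C workload with seed value 32 and result verification enabled (both flags are described in the usage text above); the long runs of 'val=' xtrace lines that follow appear to be the accel_test wrapper walking through the workload parameters (opcode, transfer size, module, run time) one var/val pair at a time. The bare command under test, omitting the harness-supplied -c /dev/fd/62 config, is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y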
00:16:24.518 [2024-04-18 11:07:32.512968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64326 ] 00:16:24.518 [2024-04-18 11:07:32.688583] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.776 [2024-04-18 11:07:32.966806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.034 11:07:33 -- accel/accel.sh@20 -- # val= 00:16:25.034 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.034 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.034 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.034 11:07:33 -- accel/accel.sh@20 -- # val= 00:16:25.034 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.034 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.034 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.034 11:07:33 -- accel/accel.sh@20 -- # val=0x1 00:16:25.034 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.034 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.034 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val= 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val= 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val=crc32c 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val=32 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val= 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val=software 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@22 -- # accel_module=software 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val=32 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val=32 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val=1 00:16:25.035 11:07:33 
-- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val=Yes 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val= 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:25.035 11:07:33 -- accel/accel.sh@20 -- # val= 00:16:25.035 11:07:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # IFS=: 00:16:25.035 11:07:33 -- accel/accel.sh@19 -- # read -r var val 00:16:26.934 11:07:35 -- accel/accel.sh@20 -- # val= 00:16:26.934 11:07:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.934 11:07:35 -- accel/accel.sh@19 -- # IFS=: 00:16:26.934 11:07:35 -- accel/accel.sh@19 -- # read -r var val 00:16:26.934 11:07:35 -- accel/accel.sh@20 -- # val= 00:16:26.934 11:07:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.934 11:07:35 -- accel/accel.sh@19 -- # IFS=: 00:16:26.934 11:07:35 -- accel/accel.sh@19 -- # read -r var val 00:16:26.934 11:07:35 -- accel/accel.sh@20 -- # val= 00:16:26.934 11:07:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.934 11:07:35 -- accel/accel.sh@19 -- # IFS=: 00:16:26.934 11:07:35 -- accel/accel.sh@19 -- # read -r var val 00:16:26.934 11:07:35 -- accel/accel.sh@20 -- # val= 00:16:26.934 11:07:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.934 11:07:35 -- accel/accel.sh@19 -- # IFS=: 00:16:26.934 11:07:35 -- accel/accel.sh@19 -- # read -r var val 00:16:26.934 11:07:35 -- accel/accel.sh@20 -- # val= 00:16:26.934 11:07:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.934 11:07:35 -- accel/accel.sh@19 -- # IFS=: 00:16:26.934 11:07:35 -- accel/accel.sh@19 -- # read -r var val 00:16:27.191 11:07:35 -- accel/accel.sh@20 -- # val= 00:16:27.191 11:07:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.191 11:07:35 -- accel/accel.sh@19 -- # IFS=: 00:16:27.191 11:07:35 -- accel/accel.sh@19 -- # read -r var val 00:16:27.191 ************************************ 00:16:27.191 END TEST accel_crc32c 00:16:27.191 ************************************ 00:16:27.191 11:07:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:27.192 11:07:35 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:27.192 11:07:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:27.192 00:16:27.192 real 0m2.707s 00:16:27.192 user 0m2.376s 00:16:27.192 sys 0m0.235s 00:16:27.192 11:07:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:27.192 11:07:35 -- common/autotest_common.sh@10 -- # set +x 00:16:27.192 11:07:35 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:16:27.192 11:07:35 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:27.192 11:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.192 11:07:35 -- common/autotest_common.sh@10 -- # set +x 00:16:27.192 ************************************ 00:16:27.192 START TEST accel_crc32c_C2 00:16:27.192 
************************************ 00:16:27.192 11:07:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:16:27.192 11:07:35 -- accel/accel.sh@16 -- # local accel_opc 00:16:27.192 11:07:35 -- accel/accel.sh@17 -- # local accel_module 00:16:27.192 11:07:35 -- accel/accel.sh@19 -- # IFS=: 00:16:27.192 11:07:35 -- accel/accel.sh@19 -- # read -r var val 00:16:27.192 11:07:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:16:27.192 11:07:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:16:27.192 11:07:35 -- accel/accel.sh@12 -- # build_accel_config 00:16:27.192 11:07:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:27.192 11:07:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:27.192 11:07:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:27.192 11:07:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:27.192 11:07:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:27.192 11:07:35 -- accel/accel.sh@40 -- # local IFS=, 00:16:27.192 11:07:35 -- accel/accel.sh@41 -- # jq -r . 00:16:27.192 [2024-04-18 11:07:35.335671] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:27.192 [2024-04-18 11:07:35.335817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64383 ] 00:16:27.449 [2024-04-18 11:07:35.499272] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.707 [2024-04-18 11:07:35.776183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val= 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val= 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val=0x1 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val= 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val= 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val=crc32c 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val=0 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" 
in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val= 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val=software 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@22 -- # accel_module=software 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val=32 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val=32 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val=1 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val=Yes 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val= 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:27.965 11:07:36 -- accel/accel.sh@20 -- # val= 00:16:27.965 11:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # IFS=: 00:16:27.965 11:07:36 -- accel/accel.sh@19 -- # read -r var val 00:16:29.898 11:07:37 -- accel/accel.sh@20 -- # val= 00:16:29.898 11:07:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # IFS=: 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # read -r var val 00:16:29.898 11:07:37 -- accel/accel.sh@20 -- # val= 00:16:29.898 11:07:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # IFS=: 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # read -r var val 00:16:29.898 11:07:37 -- accel/accel.sh@20 -- # val= 00:16:29.898 11:07:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # IFS=: 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # read -r var val 00:16:29.898 11:07:37 -- accel/accel.sh@20 -- # val= 00:16:29.898 11:07:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # IFS=: 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # read -r var val 00:16:29.898 11:07:37 -- accel/accel.sh@20 -- # val= 00:16:29.898 11:07:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # IFS=: 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # read -r var val 00:16:29.898 11:07:37 -- accel/accel.sh@20 -- # val= 
00:16:29.898 11:07:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # IFS=: 00:16:29.898 11:07:37 -- accel/accel.sh@19 -- # read -r var val 00:16:29.898 11:07:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:29.898 11:07:37 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:29.898 11:07:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:29.898 00:16:29.898 real 0m2.670s 00:16:29.898 user 0m2.350s 00:16:29.898 sys 0m0.221s 00:16:29.898 11:07:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:29.898 ************************************ 00:16:29.898 END TEST accel_crc32c_C2 00:16:29.898 ************************************ 00:16:29.898 11:07:37 -- common/autotest_common.sh@10 -- # set +x 00:16:29.898 11:07:37 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:16:29.898 11:07:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:29.898 11:07:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:29.898 11:07:38 -- common/autotest_common.sh@10 -- # set +x 00:16:29.898 ************************************ 00:16:29.898 START TEST accel_copy 00:16:29.898 ************************************ 00:16:29.898 11:07:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:16:29.898 11:07:38 -- accel/accel.sh@16 -- # local accel_opc 00:16:29.898 11:07:38 -- accel/accel.sh@17 -- # local accel_module 00:16:29.898 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:29.898 11:07:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:16:29.898 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:29.898 11:07:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:16:29.898 11:07:38 -- accel/accel.sh@12 -- # build_accel_config 00:16:29.898 11:07:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:29.898 11:07:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:29.898 11:07:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:29.898 11:07:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:29.898 11:07:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:29.898 11:07:38 -- accel/accel.sh@40 -- # local IFS=, 00:16:29.898 11:07:38 -- accel/accel.sh@41 -- # jq -r . 00:16:30.156 [2024-04-18 11:07:38.131561] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
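The remaining cases in this excerpt follow the same pattern with different workloads: accel_copy runs -w copy -y, and accel_fill runs -w fill -f 128 -q 64 -a 64 -y, where -f sets the fill byte, -q the queue depth per core and -a the number of tasks allocated per core (all documented in the usage text earlier in the log). The bare invocations, again omitting the harness-supplied -c /dev/fd/62 config, are:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y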
00:16:30.156 [2024-04-18 11:07:38.131757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64439 ] 00:16:30.156 [2024-04-18 11:07:38.309694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.413 [2024-04-18 11:07:38.593364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val= 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val= 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val=0x1 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val= 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val= 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val=copy 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@23 -- # accel_opc=copy 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val= 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val=software 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@22 -- # accel_module=software 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val=32 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val=32 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val=1 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:30.671 
11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val=Yes 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val= 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:30.671 11:07:38 -- accel/accel.sh@20 -- # val= 00:16:30.671 11:07:38 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # IFS=: 00:16:30.671 11:07:38 -- accel/accel.sh@19 -- # read -r var val 00:16:32.567 11:07:40 -- accel/accel.sh@20 -- # val= 00:16:32.567 11:07:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # IFS=: 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # read -r var val 00:16:32.567 11:07:40 -- accel/accel.sh@20 -- # val= 00:16:32.567 11:07:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # IFS=: 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # read -r var val 00:16:32.567 11:07:40 -- accel/accel.sh@20 -- # val= 00:16:32.567 11:07:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # IFS=: 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # read -r var val 00:16:32.567 11:07:40 -- accel/accel.sh@20 -- # val= 00:16:32.567 11:07:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # IFS=: 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # read -r var val 00:16:32.567 11:07:40 -- accel/accel.sh@20 -- # val= 00:16:32.567 11:07:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # IFS=: 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # read -r var val 00:16:32.567 11:07:40 -- accel/accel.sh@20 -- # val= 00:16:32.567 11:07:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # IFS=: 00:16:32.567 11:07:40 -- accel/accel.sh@19 -- # read -r var val 00:16:32.567 11:07:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:32.567 11:07:40 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:16:32.567 11:07:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:32.567 00:16:32.567 real 0m2.680s 00:16:32.567 user 0m2.326s 00:16:32.567 sys 0m0.252s 00:16:32.567 11:07:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:32.567 ************************************ 00:16:32.567 END TEST accel_copy 00:16:32.567 ************************************ 00:16:32.567 11:07:40 -- common/autotest_common.sh@10 -- # set +x 00:16:32.824 11:07:40 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:32.824 11:07:40 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:32.824 11:07:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:32.824 11:07:40 -- common/autotest_common.sh@10 -- # set +x 00:16:32.824 ************************************ 00:16:32.824 START TEST accel_fill 00:16:32.824 ************************************ 00:16:32.824 11:07:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:32.824 11:07:40 -- accel/accel.sh@16 -- # local accel_opc 00:16:32.824 11:07:40 -- accel/accel.sh@17 -- # local 
accel_module 00:16:32.824 11:07:40 -- accel/accel.sh@19 -- # IFS=: 00:16:32.824 11:07:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:32.824 11:07:40 -- accel/accel.sh@19 -- # read -r var val 00:16:32.824 11:07:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:32.824 11:07:40 -- accel/accel.sh@12 -- # build_accel_config 00:16:32.824 11:07:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:32.824 11:07:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:32.824 11:07:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:32.824 11:07:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:32.824 11:07:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:32.824 11:07:40 -- accel/accel.sh@40 -- # local IFS=, 00:16:32.824 11:07:40 -- accel/accel.sh@41 -- # jq -r . 00:16:32.824 [2024-04-18 11:07:40.933735] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:32.824 [2024-04-18 11:07:40.933976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64490 ] 00:16:33.081 [2024-04-18 11:07:41.109886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.339 [2024-04-18 11:07:41.418371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val= 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val= 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val=0x1 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val= 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val= 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val=fill 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@23 -- # accel_opc=fill 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val=0x80 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val= 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case 
"$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val=software 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@22 -- # accel_module=software 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val=64 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val=64 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val=1 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val=Yes 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val= 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:33.596 11:07:41 -- accel/accel.sh@20 -- # val= 00:16:33.596 11:07:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # IFS=: 00:16:33.596 11:07:41 -- accel/accel.sh@19 -- # read -r var val 00:16:35.501 11:07:43 -- accel/accel.sh@20 -- # val= 00:16:35.501 11:07:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # IFS=: 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # read -r var val 00:16:35.501 11:07:43 -- accel/accel.sh@20 -- # val= 00:16:35.501 11:07:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # IFS=: 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # read -r var val 00:16:35.501 11:07:43 -- accel/accel.sh@20 -- # val= 00:16:35.501 11:07:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # IFS=: 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # read -r var val 00:16:35.501 11:07:43 -- accel/accel.sh@20 -- # val= 00:16:35.501 11:07:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # IFS=: 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # read -r var val 00:16:35.501 11:07:43 -- accel/accel.sh@20 -- # val= 00:16:35.501 11:07:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # IFS=: 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # read -r var val 00:16:35.501 11:07:43 -- accel/accel.sh@20 -- # val= 00:16:35.501 11:07:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # IFS=: 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # read -r var val 00:16:35.501 11:07:43 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:16:35.501 11:07:43 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:16:35.501 11:07:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:35.501 00:16:35.501 real 0m2.721s 00:16:35.501 user 0m2.362s 00:16:35.501 sys 0m0.258s 00:16:35.501 11:07:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:35.501 ************************************ 00:16:35.501 11:07:43 -- common/autotest_common.sh@10 -- # set +x 00:16:35.501 END TEST accel_fill 00:16:35.501 ************************************ 00:16:35.501 11:07:43 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:16:35.501 11:07:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:35.501 11:07:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.501 11:07:43 -- common/autotest_common.sh@10 -- # set +x 00:16:35.501 ************************************ 00:16:35.501 START TEST accel_copy_crc32c 00:16:35.501 ************************************ 00:16:35.501 11:07:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:16:35.501 11:07:43 -- accel/accel.sh@16 -- # local accel_opc 00:16:35.501 11:07:43 -- accel/accel.sh@17 -- # local accel_module 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # IFS=: 00:16:35.501 11:07:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:16:35.501 11:07:43 -- accel/accel.sh@19 -- # read -r var val 00:16:35.501 11:07:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:16:35.501 11:07:43 -- accel/accel.sh@12 -- # build_accel_config 00:16:35.501 11:07:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:35.501 11:07:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:35.501 11:07:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:35.501 11:07:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:35.501 11:07:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:35.501 11:07:43 -- accel/accel.sh@40 -- # local IFS=, 00:16:35.501 11:07:43 -- accel/accel.sh@41 -- # jq -r . 00:16:35.758 [2024-04-18 11:07:43.748799] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:35.758 [2024-04-18 11:07:43.748986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64540 ] 00:16:35.758 [2024-04-18 11:07:43.921071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.016 [2024-04-18 11:07:44.188337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val= 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val= 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val=0x1 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val= 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val= 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val=0 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val= 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val=software 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@22 -- # accel_module=software 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val=32 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val=32 
00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val=1 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val=Yes 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val= 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:36.273 11:07:44 -- accel/accel.sh@20 -- # val= 00:16:36.273 11:07:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # IFS=: 00:16:36.273 11:07:44 -- accel/accel.sh@19 -- # read -r var val 00:16:38.186 11:07:46 -- accel/accel.sh@20 -- # val= 00:16:38.186 11:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # IFS=: 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # read -r var val 00:16:38.186 11:07:46 -- accel/accel.sh@20 -- # val= 00:16:38.186 11:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # IFS=: 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # read -r var val 00:16:38.186 11:07:46 -- accel/accel.sh@20 -- # val= 00:16:38.186 11:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # IFS=: 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # read -r var val 00:16:38.186 11:07:46 -- accel/accel.sh@20 -- # val= 00:16:38.186 11:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # IFS=: 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # read -r var val 00:16:38.186 11:07:46 -- accel/accel.sh@20 -- # val= 00:16:38.186 11:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # IFS=: 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # read -r var val 00:16:38.186 11:07:46 -- accel/accel.sh@20 -- # val= 00:16:38.186 11:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # IFS=: 00:16:38.186 11:07:46 -- accel/accel.sh@19 -- # read -r var val 00:16:38.186 11:07:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:38.186 11:07:46 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:38.186 11:07:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:38.186 00:16:38.186 real 0m2.673s 00:16:38.186 user 0m2.353s 00:16:38.186 sys 0m0.223s 00:16:38.186 11:07:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:38.186 11:07:46 -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 ************************************ 00:16:38.186 END TEST accel_copy_crc32c 00:16:38.186 ************************************ 00:16:38.445 11:07:46 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:16:38.445 11:07:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:16:38.445 11:07:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.445 11:07:46 -- common/autotest_common.sh@10 -- # set +x 00:16:38.445 ************************************ 00:16:38.445 START TEST accel_copy_crc32c_C2 00:16:38.445 ************************************ 00:16:38.445 11:07:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:16:38.445 11:07:46 -- accel/accel.sh@16 -- # local accel_opc 00:16:38.445 11:07:46 -- accel/accel.sh@17 -- # local accel_module 00:16:38.445 11:07:46 -- accel/accel.sh@19 -- # IFS=: 00:16:38.445 11:07:46 -- accel/accel.sh@19 -- # read -r var val 00:16:38.445 11:07:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:16:38.445 11:07:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:16:38.445 11:07:46 -- accel/accel.sh@12 -- # build_accel_config 00:16:38.445 11:07:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:38.445 11:07:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:38.445 11:07:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:38.445 11:07:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:38.445 11:07:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:38.445 11:07:46 -- accel/accel.sh@40 -- # local IFS=, 00:16:38.445 11:07:46 -- accel/accel.sh@41 -- # jq -r . 00:16:38.445 [2024-04-18 11:07:46.554232] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:38.445 [2024-04-18 11:07:46.554450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64595 ] 00:16:38.704 [2024-04-18 11:07:46.745065] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.962 [2024-04-18 11:07:47.048403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val= 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val= 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val=0x1 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val= 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val= 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val=0 00:16:39.220 11:07:47 -- 
accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val='8192 bytes' 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val= 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val=software 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@22 -- # accel_module=software 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val=32 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val=32 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val=1 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val=Yes 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val= 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:39.220 11:07:47 -- accel/accel.sh@20 -- # val= 00:16:39.220 11:07:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # IFS=: 00:16:39.220 11:07:47 -- accel/accel.sh@19 -- # read -r var val 00:16:41.117 11:07:49 -- accel/accel.sh@20 -- # val= 00:16:41.117 11:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # IFS=: 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # read -r var val 00:16:41.117 11:07:49 -- accel/accel.sh@20 -- # val= 00:16:41.117 11:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # IFS=: 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # read -r var val 00:16:41.117 11:07:49 -- accel/accel.sh@20 -- # val= 00:16:41.117 11:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # IFS=: 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # read -r var val 
00:16:41.117 11:07:49 -- accel/accel.sh@20 -- # val= 00:16:41.117 11:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # IFS=: 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # read -r var val 00:16:41.117 11:07:49 -- accel/accel.sh@20 -- # val= 00:16:41.117 11:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # IFS=: 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # read -r var val 00:16:41.117 11:07:49 -- accel/accel.sh@20 -- # val= 00:16:41.117 11:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # IFS=: 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # read -r var val 00:16:41.117 ************************************ 00:16:41.117 END TEST accel_copy_crc32c_C2 00:16:41.117 ************************************ 00:16:41.117 11:07:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:41.117 11:07:49 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:41.117 11:07:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:41.117 00:16:41.117 real 0m2.724s 00:16:41.117 user 0m2.368s 00:16:41.117 sys 0m0.251s 00:16:41.117 11:07:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:41.117 11:07:49 -- common/autotest_common.sh@10 -- # set +x 00:16:41.117 11:07:49 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:16:41.117 11:07:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:41.117 11:07:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.117 11:07:49 -- common/autotest_common.sh@10 -- # set +x 00:16:41.117 ************************************ 00:16:41.117 START TEST accel_dualcast 00:16:41.117 ************************************ 00:16:41.117 11:07:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:16:41.117 11:07:49 -- accel/accel.sh@16 -- # local accel_opc 00:16:41.117 11:07:49 -- accel/accel.sh@17 -- # local accel_module 00:16:41.117 11:07:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # IFS=: 00:16:41.117 11:07:49 -- accel/accel.sh@19 -- # read -r var val 00:16:41.117 11:07:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:16:41.118 11:07:49 -- accel/accel.sh@12 -- # build_accel_config 00:16:41.118 11:07:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:41.118 11:07:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:41.118 11:07:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:41.118 11:07:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:41.118 11:07:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:41.118 11:07:49 -- accel/accel.sh@40 -- # local IFS=, 00:16:41.118 11:07:49 -- accel/accel.sh@41 -- # jq -r . 00:16:41.375 [2024-04-18 11:07:49.376694] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:41.375 [2024-04-18 11:07:49.376868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64647 ] 00:16:41.375 [2024-04-18 11:07:49.549166] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.633 [2024-04-18 11:07:49.821537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.890 11:07:50 -- accel/accel.sh@20 -- # val= 00:16:41.890 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.890 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.890 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.890 11:07:50 -- accel/accel.sh@20 -- # val= 00:16:41.890 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.890 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.890 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.890 11:07:50 -- accel/accel.sh@20 -- # val=0x1 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val= 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val= 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val=dualcast 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val= 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val=software 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@22 -- # accel_module=software 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val=32 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val=32 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val=1 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val='1 seconds' 
00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val=Yes 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val= 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:41.891 11:07:50 -- accel/accel.sh@20 -- # val= 00:16:41.891 11:07:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # IFS=: 00:16:41.891 11:07:50 -- accel/accel.sh@19 -- # read -r var val 00:16:43.790 11:07:51 -- accel/accel.sh@20 -- # val= 00:16:43.790 11:07:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # IFS=: 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # read -r var val 00:16:43.790 11:07:51 -- accel/accel.sh@20 -- # val= 00:16:43.790 11:07:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # IFS=: 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # read -r var val 00:16:43.790 11:07:51 -- accel/accel.sh@20 -- # val= 00:16:43.790 11:07:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # IFS=: 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # read -r var val 00:16:43.790 11:07:51 -- accel/accel.sh@20 -- # val= 00:16:43.790 11:07:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # IFS=: 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # read -r var val 00:16:43.790 11:07:51 -- accel/accel.sh@20 -- # val= 00:16:43.790 11:07:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # IFS=: 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # read -r var val 00:16:43.790 11:07:51 -- accel/accel.sh@20 -- # val= 00:16:43.790 11:07:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # IFS=: 00:16:43.790 11:07:51 -- accel/accel.sh@19 -- # read -r var val 00:16:43.790 11:07:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:43.791 11:07:51 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:16:43.791 11:07:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:43.791 00:16:43.791 real 0m2.655s 00:16:43.791 user 0m2.324s 00:16:43.791 sys 0m0.234s 00:16:43.791 ************************************ 00:16:43.791 END TEST accel_dualcast 00:16:43.791 ************************************ 00:16:43.791 11:07:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.791 11:07:51 -- common/autotest_common.sh@10 -- # set +x 00:16:44.048 11:07:52 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:16:44.048 11:07:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:44.048 11:07:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.048 11:07:52 -- common/autotest_common.sh@10 -- # set +x 00:16:44.048 ************************************ 00:16:44.048 START TEST accel_compare 00:16:44.048 ************************************ 00:16:44.048 11:07:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:16:44.048 11:07:52 -- accel/accel.sh@16 -- # local accel_opc 00:16:44.048 11:07:52 -- accel/accel.sh@17 -- # local 
accel_module 00:16:44.048 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.048 11:07:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:16:44.048 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.048 11:07:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:16:44.048 11:07:52 -- accel/accel.sh@12 -- # build_accel_config 00:16:44.048 11:07:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:44.048 11:07:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:44.048 11:07:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:44.048 11:07:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:44.048 11:07:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:44.048 11:07:52 -- accel/accel.sh@40 -- # local IFS=, 00:16:44.048 11:07:52 -- accel/accel.sh@41 -- # jq -r . 00:16:44.048 [2024-04-18 11:07:52.160771] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:44.048 [2024-04-18 11:07:52.160977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64704 ] 00:16:44.305 [2024-04-18 11:07:52.339627] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.562 [2024-04-18 11:07:52.606919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val= 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val= 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val=0x1 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val= 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val= 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val=compare 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@23 -- # accel_opc=compare 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val= 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val=software 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 
00:16:44.820 11:07:52 -- accel/accel.sh@22 -- # accel_module=software 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val=32 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val=32 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val=1 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val=Yes 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val= 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:44.820 11:07:52 -- accel/accel.sh@20 -- # val= 00:16:44.820 11:07:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # IFS=: 00:16:44.820 11:07:52 -- accel/accel.sh@19 -- # read -r var val 00:16:46.718 11:07:54 -- accel/accel.sh@20 -- # val= 00:16:46.719 11:07:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # IFS=: 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # read -r var val 00:16:46.719 11:07:54 -- accel/accel.sh@20 -- # val= 00:16:46.719 11:07:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # IFS=: 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # read -r var val 00:16:46.719 11:07:54 -- accel/accel.sh@20 -- # val= 00:16:46.719 11:07:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # IFS=: 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # read -r var val 00:16:46.719 11:07:54 -- accel/accel.sh@20 -- # val= 00:16:46.719 11:07:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # IFS=: 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # read -r var val 00:16:46.719 11:07:54 -- accel/accel.sh@20 -- # val= 00:16:46.719 11:07:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # IFS=: 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # read -r var val 00:16:46.719 11:07:54 -- accel/accel.sh@20 -- # val= 00:16:46.719 11:07:54 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # IFS=: 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # read -r var val 00:16:46.719 11:07:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:46.719 11:07:54 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:16:46.719 11:07:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:46.719 00:16:46.719 real 0m2.664s 00:16:46.719 user 0m2.324s 00:16:46.719 sys 
0m0.244s 00:16:46.719 11:07:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:46.719 ************************************ 00:16:46.719 END TEST accel_compare 00:16:46.719 ************************************ 00:16:46.719 11:07:54 -- common/autotest_common.sh@10 -- # set +x 00:16:46.719 11:07:54 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:16:46.719 11:07:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:46.719 11:07:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.719 11:07:54 -- common/autotest_common.sh@10 -- # set +x 00:16:46.719 ************************************ 00:16:46.719 START TEST accel_xor 00:16:46.719 ************************************ 00:16:46.719 11:07:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:16:46.719 11:07:54 -- accel/accel.sh@16 -- # local accel_opc 00:16:46.719 11:07:54 -- accel/accel.sh@17 -- # local accel_module 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # IFS=: 00:16:46.719 11:07:54 -- accel/accel.sh@19 -- # read -r var val 00:16:46.719 11:07:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:16:46.719 11:07:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:16:46.719 11:07:54 -- accel/accel.sh@12 -- # build_accel_config 00:16:46.719 11:07:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:46.719 11:07:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:46.719 11:07:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:46.719 11:07:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:46.719 11:07:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:46.719 11:07:54 -- accel/accel.sh@40 -- # local IFS=, 00:16:46.719 11:07:54 -- accel/accel.sh@41 -- # jq -r . 00:16:46.719 [2024-04-18 11:07:54.934645] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:46.719 [2024-04-18 11:07:54.934801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64749 ] 00:16:46.976 [2024-04-18 11:07:55.100446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.234 [2024-04-18 11:07:55.367553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.491 11:07:55 -- accel/accel.sh@20 -- # val= 00:16:47.491 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.491 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.491 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.491 11:07:55 -- accel/accel.sh@20 -- # val= 00:16:47.491 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.491 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.491 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.491 11:07:55 -- accel/accel.sh@20 -- # val=0x1 00:16:47.491 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.491 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.491 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.491 11:07:55 -- accel/accel.sh@20 -- # val= 00:16:47.491 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.491 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.491 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.491 11:07:55 -- accel/accel.sh@20 -- # val= 00:16:47.491 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.491 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.491 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.491 11:07:55 -- accel/accel.sh@20 -- # val=xor 00:16:47.491 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@23 -- # accel_opc=xor 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val=2 00:16:47.492 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:47.492 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val= 00:16:47.492 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val=software 00:16:47.492 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@22 -- # accel_module=software 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val=32 00:16:47.492 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val=32 00:16:47.492 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val=1 00:16:47.492 11:07:55 -- 
accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:47.492 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val=Yes 00:16:47.492 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val= 00:16:47.492 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:47.492 11:07:55 -- accel/accel.sh@20 -- # val= 00:16:47.492 11:07:55 -- accel/accel.sh@21 -- # case "$var" in 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # IFS=: 00:16:47.492 11:07:55 -- accel/accel.sh@19 -- # read -r var val 00:16:49.391 11:07:57 -- accel/accel.sh@20 -- # val= 00:16:49.391 11:07:57 -- accel/accel.sh@21 -- # case "$var" in 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # IFS=: 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # read -r var val 00:16:49.391 11:07:57 -- accel/accel.sh@20 -- # val= 00:16:49.391 11:07:57 -- accel/accel.sh@21 -- # case "$var" in 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # IFS=: 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # read -r var val 00:16:49.391 11:07:57 -- accel/accel.sh@20 -- # val= 00:16:49.391 11:07:57 -- accel/accel.sh@21 -- # case "$var" in 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # IFS=: 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # read -r var val 00:16:49.391 11:07:57 -- accel/accel.sh@20 -- # val= 00:16:49.391 11:07:57 -- accel/accel.sh@21 -- # case "$var" in 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # IFS=: 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # read -r var val 00:16:49.391 11:07:57 -- accel/accel.sh@20 -- # val= 00:16:49.391 11:07:57 -- accel/accel.sh@21 -- # case "$var" in 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # IFS=: 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # read -r var val 00:16:49.391 11:07:57 -- accel/accel.sh@20 -- # val= 00:16:49.391 11:07:57 -- accel/accel.sh@21 -- # case "$var" in 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # IFS=: 00:16:49.391 11:07:57 -- accel/accel.sh@19 -- # read -r var val 00:16:49.391 11:07:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:49.391 11:07:57 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:49.391 11:07:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:49.391 00:16:49.391 real 0m2.651s 00:16:49.391 user 0m2.340s 00:16:49.391 sys 0m0.210s 00:16:49.391 11:07:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:49.391 11:07:57 -- common/autotest_common.sh@10 -- # set +x 00:16:49.391 ************************************ 00:16:49.391 END TEST accel_xor 00:16:49.391 ************************************ 00:16:49.391 11:07:57 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:16:49.391 11:07:57 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:49.391 11:07:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.391 11:07:57 -- common/autotest_common.sh@10 -- # set +x 00:16:49.649 ************************************ 00:16:49.649 START TEST accel_xor 00:16:49.649 ************************************ 00:16:49.649 
11:07:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:16:49.649 11:07:57 -- accel/accel.sh@16 -- # local accel_opc 00:16:49.649 11:07:57 -- accel/accel.sh@17 -- # local accel_module 00:16:49.649 11:07:57 -- accel/accel.sh@19 -- # IFS=: 00:16:49.649 11:07:57 -- accel/accel.sh@19 -- # read -r var val 00:16:49.649 11:07:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:16:49.649 11:07:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:16:49.649 11:07:57 -- accel/accel.sh@12 -- # build_accel_config 00:16:49.649 11:07:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:49.649 11:07:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:49.649 11:07:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:49.649 11:07:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:49.649 11:07:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:49.649 11:07:57 -- accel/accel.sh@40 -- # local IFS=, 00:16:49.649 11:07:57 -- accel/accel.sh@41 -- # jq -r . 00:16:49.649 [2024-04-18 11:07:57.714576] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:49.649 [2024-04-18 11:07:57.714780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64805 ] 00:16:49.907 [2024-04-18 11:07:57.888957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.165 [2024-04-18 11:07:58.160802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val= 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val= 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val=0x1 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val= 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val= 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val=xor 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@23 -- # accel_opc=xor 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val=3 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 
00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val= 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val=software 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@22 -- # accel_module=software 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val=32 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val=32 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val=1 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val=Yes 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val= 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:50.422 11:07:58 -- accel/accel.sh@20 -- # val= 00:16:50.422 11:07:58 -- accel/accel.sh@21 -- # case "$var" in 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # IFS=: 00:16:50.422 11:07:58 -- accel/accel.sh@19 -- # read -r var val 00:16:52.321 11:08:00 -- accel/accel.sh@20 -- # val= 00:16:52.321 11:08:00 -- accel/accel.sh@21 -- # case "$var" in 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # IFS=: 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # read -r var val 00:16:52.321 11:08:00 -- accel/accel.sh@20 -- # val= 00:16:52.321 11:08:00 -- accel/accel.sh@21 -- # case "$var" in 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # IFS=: 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # read -r var val 00:16:52.321 11:08:00 -- accel/accel.sh@20 -- # val= 00:16:52.321 11:08:00 -- accel/accel.sh@21 -- # case "$var" in 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # IFS=: 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # read -r var val 00:16:52.321 11:08:00 -- accel/accel.sh@20 -- # val= 00:16:52.321 11:08:00 -- accel/accel.sh@21 -- # case "$var" in 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # IFS=: 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # read -r var val 00:16:52.321 11:08:00 -- accel/accel.sh@20 -- # val= 00:16:52.321 11:08:00 -- accel/accel.sh@21 -- # case "$var" in 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # IFS=: 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # read -r var val 00:16:52.321 11:08:00 -- accel/accel.sh@20 -- # val= 00:16:52.321 11:08:00 -- accel/accel.sh@21 -- # case "$var" in 
00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # IFS=: 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # read -r var val 00:16:52.321 11:08:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:52.321 11:08:00 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:52.321 11:08:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:52.321 00:16:52.321 real 0m2.749s 00:16:52.321 user 0m2.404s 00:16:52.321 sys 0m0.246s 00:16:52.321 11:08:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:52.321 11:08:00 -- common/autotest_common.sh@10 -- # set +x 00:16:52.321 ************************************ 00:16:52.321 END TEST accel_xor 00:16:52.321 ************************************ 00:16:52.321 11:08:00 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:16:52.321 11:08:00 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:52.321 11:08:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:52.321 11:08:00 -- common/autotest_common.sh@10 -- # set +x 00:16:52.321 ************************************ 00:16:52.321 START TEST accel_dif_verify 00:16:52.321 ************************************ 00:16:52.321 11:08:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:16:52.321 11:08:00 -- accel/accel.sh@16 -- # local accel_opc 00:16:52.321 11:08:00 -- accel/accel.sh@17 -- # local accel_module 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # IFS=: 00:16:52.321 11:08:00 -- accel/accel.sh@19 -- # read -r var val 00:16:52.321 11:08:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:16:52.321 11:08:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:16:52.321 11:08:00 -- accel/accel.sh@12 -- # build_accel_config 00:16:52.321 11:08:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:52.321 11:08:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:52.321 11:08:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:52.321 11:08:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:52.321 11:08:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:52.321 11:08:00 -- accel/accel.sh@40 -- # local IFS=, 00:16:52.321 11:08:00 -- accel/accel.sh@41 -- # jq -r . 00:16:52.635 [2024-04-18 11:08:00.561758] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:52.636 [2024-04-18 11:08:00.561897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64862 ] 00:16:52.636 [2024-04-18 11:08:00.725698] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.893 [2024-04-18 11:08:01.018496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val= 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val= 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val=0x1 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val= 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val= 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val=dif_verify 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val='512 bytes' 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val='8 bytes' 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val= 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val=software 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@22 -- # accel_module=software 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 
-- # val=32 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val=32 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val=1 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val=No 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val= 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:53.150 11:08:01 -- accel/accel.sh@20 -- # val= 00:16:53.150 11:08:01 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # IFS=: 00:16:53.150 11:08:01 -- accel/accel.sh@19 -- # read -r var val 00:16:55.057 11:08:03 -- accel/accel.sh@20 -- # val= 00:16:55.057 11:08:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # IFS=: 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # read -r var val 00:16:55.057 11:08:03 -- accel/accel.sh@20 -- # val= 00:16:55.057 11:08:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # IFS=: 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # read -r var val 00:16:55.057 11:08:03 -- accel/accel.sh@20 -- # val= 00:16:55.057 11:08:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # IFS=: 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # read -r var val 00:16:55.057 11:08:03 -- accel/accel.sh@20 -- # val= 00:16:55.057 11:08:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # IFS=: 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # read -r var val 00:16:55.057 11:08:03 -- accel/accel.sh@20 -- # val= 00:16:55.057 11:08:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # IFS=: 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # read -r var val 00:16:55.057 11:08:03 -- accel/accel.sh@20 -- # val= 00:16:55.057 11:08:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # IFS=: 00:16:55.057 11:08:03 -- accel/accel.sh@19 -- # read -r var val 00:16:55.057 ************************************ 00:16:55.057 END TEST accel_dif_verify 00:16:55.057 ************************************ 00:16:55.057 11:08:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:55.057 11:08:03 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:16:55.057 11:08:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:55.057 00:16:55.057 real 0m2.668s 00:16:55.057 user 0m2.337s 00:16:55.057 sys 0m0.235s 00:16:55.057 11:08:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:55.057 
11:08:03 -- common/autotest_common.sh@10 -- # set +x 00:16:55.057 11:08:03 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:16:55.057 11:08:03 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:55.057 11:08:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:55.057 11:08:03 -- common/autotest_common.sh@10 -- # set +x 00:16:55.323 ************************************ 00:16:55.323 START TEST accel_dif_generate 00:16:55.323 ************************************ 00:16:55.323 11:08:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:16:55.323 11:08:03 -- accel/accel.sh@16 -- # local accel_opc 00:16:55.323 11:08:03 -- accel/accel.sh@17 -- # local accel_module 00:16:55.323 11:08:03 -- accel/accel.sh@19 -- # IFS=: 00:16:55.323 11:08:03 -- accel/accel.sh@19 -- # read -r var val 00:16:55.323 11:08:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:16:55.323 11:08:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:16:55.323 11:08:03 -- accel/accel.sh@12 -- # build_accel_config 00:16:55.323 11:08:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:55.323 11:08:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:55.323 11:08:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:55.323 11:08:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:55.323 11:08:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:55.323 11:08:03 -- accel/accel.sh@40 -- # local IFS=, 00:16:55.323 11:08:03 -- accel/accel.sh@41 -- # jq -r . 00:16:55.323 [2024-04-18 11:08:03.357869] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:55.323 [2024-04-18 11:08:03.358043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64913 ] 00:16:55.323 [2024-04-18 11:08:03.535531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.888 [2024-04-18 11:08:03.832237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val= 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val= 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val=0x1 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val= 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val= 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val=dif_generate 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val='512 bytes' 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val='8 bytes' 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val= 00:16:55.888 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.888 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.888 11:08:04 -- accel/accel.sh@20 -- # val=software 00:16:55.889 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.889 11:08:04 -- accel/accel.sh@22 -- # accel_module=software 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.889 11:08:04 -- accel/accel.sh@20 -- # val=32 00:16:55.889 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.889 11:08:04 -- accel/accel.sh@20 -- # val=32 00:16:55.889 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.889 11:08:04 -- accel/accel.sh@20 -- # val=1 00:16:55.889 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.889 11:08:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:55.889 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.889 11:08:04 -- accel/accel.sh@20 -- # val=No 00:16:55.889 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.889 11:08:04 -- accel/accel.sh@20 -- # val= 00:16:55.889 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:55.889 11:08:04 -- accel/accel.sh@20 -- # val= 00:16:55.889 11:08:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # IFS=: 00:16:55.889 11:08:04 -- accel/accel.sh@19 -- # read -r var val 00:16:57.785 11:08:05 -- accel/accel.sh@20 -- # val= 00:16:57.785 11:08:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # IFS=: 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # read -r var 
val 00:16:57.785 11:08:05 -- accel/accel.sh@20 -- # val= 00:16:57.785 11:08:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # IFS=: 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # read -r var val 00:16:57.785 11:08:05 -- accel/accel.sh@20 -- # val= 00:16:57.785 11:08:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # IFS=: 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # read -r var val 00:16:57.785 11:08:05 -- accel/accel.sh@20 -- # val= 00:16:57.785 11:08:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # IFS=: 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # read -r var val 00:16:57.785 11:08:05 -- accel/accel.sh@20 -- # val= 00:16:57.785 11:08:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # IFS=: 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # read -r var val 00:16:57.785 11:08:05 -- accel/accel.sh@20 -- # val= 00:16:57.785 11:08:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # IFS=: 00:16:57.785 11:08:05 -- accel/accel.sh@19 -- # read -r var val 00:16:57.785 11:08:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:57.785 11:08:05 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:16:57.785 ************************************ 00:16:57.785 END TEST accel_dif_generate 00:16:57.785 ************************************ 00:16:57.785 11:08:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:57.785 00:16:57.785 real 0m2.694s 00:16:57.785 user 0m2.363s 00:16:57.785 sys 0m0.235s 00:16:57.785 11:08:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:57.785 11:08:05 -- common/autotest_common.sh@10 -- # set +x 00:16:58.042 11:08:06 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:16:58.042 11:08:06 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:58.042 11:08:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:58.043 11:08:06 -- common/autotest_common.sh@10 -- # set +x 00:16:58.043 ************************************ 00:16:58.043 START TEST accel_dif_generate_copy 00:16:58.043 ************************************ 00:16:58.043 11:08:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:16:58.043 11:08:06 -- accel/accel.sh@16 -- # local accel_opc 00:16:58.043 11:08:06 -- accel/accel.sh@17 -- # local accel_module 00:16:58.043 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.043 11:08:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:16:58.043 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.043 11:08:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:16:58.043 11:08:06 -- accel/accel.sh@12 -- # build_accel_config 00:16:58.043 11:08:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:58.043 11:08:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:58.043 11:08:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:58.043 11:08:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:58.043 11:08:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:58.043 11:08:06 -- accel/accel.sh@40 -- # local IFS=, 00:16:58.043 11:08:06 -- accel/accel.sh@41 -- # jq -r . 00:16:58.043 [2024-04-18 11:08:06.159333] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:16:58.043 [2024-04-18 11:08:06.159530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64964 ] 00:16:58.299 [2024-04-18 11:08:06.332679] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.557 [2024-04-18 11:08:06.575045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val= 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val= 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val=0x1 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val= 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val= 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val= 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val=software 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@22 -- # accel_module=software 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val=32 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val=32 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 
-- # val=1 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val=No 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val= 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:16:58.815 11:08:06 -- accel/accel.sh@20 -- # val= 00:16:58.815 11:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # IFS=: 00:16:58.815 11:08:06 -- accel/accel.sh@19 -- # read -r var val 00:17:00.711 11:08:08 -- accel/accel.sh@20 -- # val= 00:17:00.711 11:08:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # IFS=: 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # read -r var val 00:17:00.711 11:08:08 -- accel/accel.sh@20 -- # val= 00:17:00.711 11:08:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # IFS=: 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # read -r var val 00:17:00.711 11:08:08 -- accel/accel.sh@20 -- # val= 00:17:00.711 11:08:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # IFS=: 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # read -r var val 00:17:00.711 11:08:08 -- accel/accel.sh@20 -- # val= 00:17:00.711 11:08:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # IFS=: 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # read -r var val 00:17:00.711 11:08:08 -- accel/accel.sh@20 -- # val= 00:17:00.711 11:08:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # IFS=: 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # read -r var val 00:17:00.711 11:08:08 -- accel/accel.sh@20 -- # val= 00:17:00.711 11:08:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # IFS=: 00:17:00.711 11:08:08 -- accel/accel.sh@19 -- # read -r var val 00:17:00.711 11:08:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:00.711 11:08:08 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:17:00.711 11:08:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:00.711 00:17:00.711 real 0m2.564s 00:17:00.711 user 0m2.262s 00:17:00.711 sys 0m0.203s 00:17:00.711 11:08:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:00.711 11:08:08 -- common/autotest_common.sh@10 -- # set +x 00:17:00.711 ************************************ 00:17:00.711 END TEST accel_dif_generate_copy 00:17:00.711 ************************************ 00:17:00.711 11:08:08 -- accel/accel.sh@115 -- # [[ y == y ]] 00:17:00.711 11:08:08 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:00.711 11:08:08 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:17:00.711 11:08:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.711 11:08:08 -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.711 ************************************ 00:17:00.711 START TEST accel_comp 00:17:00.711 ************************************ 00:17:00.711 11:08:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:00.711 11:08:08 -- accel/accel.sh@16 -- # local accel_opc 00:17:00.712 11:08:08 -- accel/accel.sh@17 -- # local accel_module 00:17:00.712 11:08:08 -- accel/accel.sh@19 -- # IFS=: 00:17:00.712 11:08:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:00.712 11:08:08 -- accel/accel.sh@19 -- # read -r var val 00:17:00.712 11:08:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:00.712 11:08:08 -- accel/accel.sh@12 -- # build_accel_config 00:17:00.712 11:08:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:00.712 11:08:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:00.712 11:08:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:00.712 11:08:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:00.712 11:08:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:00.712 11:08:08 -- accel/accel.sh@40 -- # local IFS=, 00:17:00.712 11:08:08 -- accel/accel.sh@41 -- # jq -r . 00:17:00.712 [2024-04-18 11:08:08.826718] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:00.712 [2024-04-18 11:08:08.826888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65015 ] 00:17:00.969 [2024-04-18 11:08:08.998060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.227 [2024-04-18 11:08:09.288944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val= 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val= 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val= 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val=0x1 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val= 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val= 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val=compress 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@23 
-- # accel_opc=compress 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val= 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val=software 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@22 -- # accel_module=software 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val=32 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val=32 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val=1 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val=No 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val= 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:01.486 11:08:09 -- accel/accel.sh@20 -- # val= 00:17:01.486 11:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # IFS=: 00:17:01.486 11:08:09 -- accel/accel.sh@19 -- # read -r var val 00:17:03.386 11:08:11 -- accel/accel.sh@20 -- # val= 00:17:03.386 11:08:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # IFS=: 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # read -r var val 00:17:03.386 11:08:11 -- accel/accel.sh@20 -- # val= 00:17:03.386 11:08:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # IFS=: 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # read -r var val 00:17:03.386 11:08:11 -- accel/accel.sh@20 -- # val= 00:17:03.386 11:08:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # IFS=: 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # 
read -r var val 00:17:03.386 11:08:11 -- accel/accel.sh@20 -- # val= 00:17:03.386 11:08:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # IFS=: 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # read -r var val 00:17:03.386 11:08:11 -- accel/accel.sh@20 -- # val= 00:17:03.386 11:08:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # IFS=: 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # read -r var val 00:17:03.386 11:08:11 -- accel/accel.sh@20 -- # val= 00:17:03.386 11:08:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # IFS=: 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # read -r var val 00:17:03.386 11:08:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:03.386 11:08:11 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:17:03.386 11:08:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:03.386 00:17:03.386 real 0m2.650s 00:17:03.386 user 0m2.350s 00:17:03.386 sys 0m0.199s 00:17:03.386 11:08:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:03.386 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:17:03.386 ************************************ 00:17:03.386 END TEST accel_comp 00:17:03.386 ************************************ 00:17:03.386 11:08:11 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:03.386 11:08:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:17:03.386 11:08:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.386 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:17:03.386 ************************************ 00:17:03.386 START TEST accel_decomp 00:17:03.386 ************************************ 00:17:03.386 11:08:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:03.386 11:08:11 -- accel/accel.sh@16 -- # local accel_opc 00:17:03.386 11:08:11 -- accel/accel.sh@17 -- # local accel_module 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # IFS=: 00:17:03.386 11:08:11 -- accel/accel.sh@19 -- # read -r var val 00:17:03.386 11:08:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:03.386 11:08:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:03.386 11:08:11 -- accel/accel.sh@12 -- # build_accel_config 00:17:03.386 11:08:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:03.386 11:08:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:03.386 11:08:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:03.386 11:08:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:03.386 11:08:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:03.386 11:08:11 -- accel/accel.sh@40 -- # local IFS=, 00:17:03.386 11:08:11 -- accel/accel.sh@41 -- # jq -r . 00:17:03.386 [2024-04-18 11:08:11.600951] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:03.386 [2024-04-18 11:08:11.601169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65071 ] 00:17:03.644 [2024-04-18 11:08:11.778766] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.904 [2024-04-18 11:08:12.068626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.163 11:08:12 -- accel/accel.sh@20 -- # val= 00:17:04.163 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.163 11:08:12 -- accel/accel.sh@20 -- # val= 00:17:04.163 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.163 11:08:12 -- accel/accel.sh@20 -- # val= 00:17:04.163 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.163 11:08:12 -- accel/accel.sh@20 -- # val=0x1 00:17:04.163 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.163 11:08:12 -- accel/accel.sh@20 -- # val= 00:17:04.163 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.163 11:08:12 -- accel/accel.sh@20 -- # val= 00:17:04.163 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.163 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.163 11:08:12 -- accel/accel.sh@20 -- # val=decompress 00:17:04.163 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- accel/accel.sh@20 -- # val= 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- accel/accel.sh@20 -- # val=software 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@22 -- # accel_module=software 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- accel/accel.sh@20 -- # val=32 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- 
accel/accel.sh@20 -- # val=32 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- accel/accel.sh@20 -- # val=1 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- accel/accel.sh@20 -- # val=Yes 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- accel/accel.sh@20 -- # val= 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:04.164 11:08:12 -- accel/accel.sh@20 -- # val= 00:17:04.164 11:08:12 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # IFS=: 00:17:04.164 11:08:12 -- accel/accel.sh@19 -- # read -r var val 00:17:06.136 11:08:14 -- accel/accel.sh@20 -- # val= 00:17:06.136 11:08:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.136 11:08:14 -- accel/accel.sh@19 -- # IFS=: 00:17:06.136 11:08:14 -- accel/accel.sh@19 -- # read -r var val 00:17:06.136 11:08:14 -- accel/accel.sh@20 -- # val= 00:17:06.136 11:08:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.136 11:08:14 -- accel/accel.sh@19 -- # IFS=: 00:17:06.136 11:08:14 -- accel/accel.sh@19 -- # read -r var val 00:17:06.136 11:08:14 -- accel/accel.sh@20 -- # val= 00:17:06.136 11:08:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.136 11:08:14 -- accel/accel.sh@19 -- # IFS=: 00:17:06.136 11:08:14 -- accel/accel.sh@19 -- # read -r var val 00:17:06.136 11:08:14 -- accel/accel.sh@20 -- # val= 00:17:06.136 11:08:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.136 11:08:14 -- accel/accel.sh@19 -- # IFS=: 00:17:06.137 11:08:14 -- accel/accel.sh@19 -- # read -r var val 00:17:06.137 11:08:14 -- accel/accel.sh@20 -- # val= 00:17:06.137 11:08:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.137 11:08:14 -- accel/accel.sh@19 -- # IFS=: 00:17:06.137 11:08:14 -- accel/accel.sh@19 -- # read -r var val 00:17:06.137 11:08:14 -- accel/accel.sh@20 -- # val= 00:17:06.137 11:08:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.137 11:08:14 -- accel/accel.sh@19 -- # IFS=: 00:17:06.137 11:08:14 -- accel/accel.sh@19 -- # read -r var val 00:17:06.137 11:08:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:06.137 11:08:14 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:06.137 ************************************ 00:17:06.137 END TEST accel_decomp 00:17:06.137 ************************************ 00:17:06.137 11:08:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:06.137 00:17:06.137 real 0m2.664s 00:17:06.137 user 0m2.363s 00:17:06.137 sys 0m0.200s 00:17:06.137 11:08:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:06.137 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:17:06.137 11:08:14 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:17:06.137 11:08:14 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:17:06.137 11:08:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.137 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:17:06.137 ************************************ 00:17:06.137 START TEST accel_decmop_full 00:17:06.137 ************************************ 00:17:06.137 11:08:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:06.137 11:08:14 -- accel/accel.sh@16 -- # local accel_opc 00:17:06.137 11:08:14 -- accel/accel.sh@17 -- # local accel_module 00:17:06.137 11:08:14 -- accel/accel.sh@19 -- # IFS=: 00:17:06.137 11:08:14 -- accel/accel.sh@19 -- # read -r var val 00:17:06.137 11:08:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:06.137 11:08:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:06.137 11:08:14 -- accel/accel.sh@12 -- # build_accel_config 00:17:06.137 11:08:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:06.137 11:08:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:06.137 11:08:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:06.137 11:08:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:06.137 11:08:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:06.137 11:08:14 -- accel/accel.sh@40 -- # local IFS=, 00:17:06.137 11:08:14 -- accel/accel.sh@41 -- # jq -r . 00:17:06.395 [2024-04-18 11:08:14.379515] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:06.395 [2024-04-18 11:08:14.379709] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65121 ] 00:17:06.395 [2024-04-18 11:08:14.558189] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.654 [2024-04-18 11:08:14.861869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val= 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val= 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val= 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val=0x1 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val= 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val= 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 
11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val=decompress 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val= 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val=software 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@22 -- # accel_module=software 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val=32 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val=32 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.912 11:08:15 -- accel/accel.sh@20 -- # val=1 00:17:06.912 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.912 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.913 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.913 11:08:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:06.913 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.913 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.913 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.913 11:08:15 -- accel/accel.sh@20 -- # val=Yes 00:17:06.913 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.913 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.913 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.913 11:08:15 -- accel/accel.sh@20 -- # val= 00:17:06.913 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.913 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.913 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:06.913 11:08:15 -- accel/accel.sh@20 -- # val= 00:17:06.913 11:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.913 11:08:15 -- accel/accel.sh@19 -- # IFS=: 00:17:06.913 11:08:15 -- accel/accel.sh@19 -- # read -r var val 00:17:08.813 11:08:16 -- accel/accel.sh@20 -- # val= 00:17:08.813 11:08:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:08.813 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:08.813 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # read -r 
var val 00:17:08.813 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:08.813 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:08.813 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:08.813 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:08.813 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:08.813 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:08.813 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:08.813 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:08.813 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:08.813 11:08:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:08.813 11:08:17 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:08.813 11:08:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:08.813 00:17:08.813 real 0m2.686s 00:17:08.813 user 0m2.389s 00:17:08.813 sys 0m0.201s 00:17:08.813 ************************************ 00:17:08.813 END TEST accel_decmop_full 00:17:08.813 ************************************ 00:17:08.813 11:08:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:08.813 11:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:09.071 11:08:17 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:09.071 11:08:17 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:17:09.071 11:08:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:09.071 11:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:09.071 ************************************ 00:17:09.071 START TEST accel_decomp_mcore 00:17:09.071 ************************************ 00:17:09.071 11:08:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:09.071 11:08:17 -- accel/accel.sh@16 -- # local accel_opc 00:17:09.071 11:08:17 -- accel/accel.sh@17 -- # local accel_module 00:17:09.071 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.071 11:08:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:09.071 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.071 11:08:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:09.071 11:08:17 -- accel/accel.sh@12 -- # build_accel_config 00:17:09.071 11:08:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:09.071 11:08:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:09.071 11:08:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:09.071 11:08:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:09.071 11:08:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:09.071 11:08:17 -- accel/accel.sh@40 -- # local IFS=, 00:17:09.071 11:08:17 -- accel/accel.sh@41 -- # jq -r . 00:17:09.071 [2024-04-18 11:08:17.185094] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:17:09.071 [2024-04-18 11:08:17.185302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65172 ] 00:17:09.329 [2024-04-18 11:08:17.359198] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.588 [2024-04-18 11:08:17.612039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.588 [2024-04-18 11:08:17.612193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.588 [2024-04-18 11:08:17.612693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.588 [2024-04-18 11:08:17.612696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val=0xf 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val=decompress 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val=software 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@22 -- # accel_module=software 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 
00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val=32 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val=32 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val=1 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val=Yes 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.847 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.847 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:09.847 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.848 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.848 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:09.848 11:08:17 -- accel/accel.sh@20 -- # val= 00:17:09.848 11:08:17 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.848 11:08:17 -- accel/accel.sh@19 -- # IFS=: 00:17:09.848 11:08:17 -- accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@20 -- # val= 00:17:11.750 11:08:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # IFS=: 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@20 -- # val= 00:17:11.750 11:08:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # IFS=: 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@20 -- # val= 00:17:11.750 11:08:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # IFS=: 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@20 -- # val= 00:17:11.750 11:08:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # IFS=: 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@20 -- # val= 00:17:11.750 11:08:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # IFS=: 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@20 -- # val= 00:17:11.750 11:08:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # IFS=: 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@20 -- # val= 00:17:11.750 11:08:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # IFS=: 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@20 -- # val= 00:17:11.750 11:08:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # IFS=: 00:17:11.750 11:08:19 -- 
accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@20 -- # val= 00:17:11.750 11:08:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # IFS=: 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:11.750 11:08:19 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:11.750 11:08:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:11.750 00:17:11.750 real 0m2.645s 00:17:11.750 user 0m7.449s 00:17:11.750 sys 0m0.251s 00:17:11.750 11:08:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:11.750 11:08:19 -- common/autotest_common.sh@10 -- # set +x 00:17:11.750 ************************************ 00:17:11.750 END TEST accel_decomp_mcore 00:17:11.750 ************************************ 00:17:11.750 11:08:19 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:11.750 11:08:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:17:11.750 11:08:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:11.750 11:08:19 -- common/autotest_common.sh@10 -- # set +x 00:17:11.750 ************************************ 00:17:11.750 START TEST accel_decomp_full_mcore 00:17:11.750 ************************************ 00:17:11.750 11:08:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:11.750 11:08:19 -- accel/accel.sh@16 -- # local accel_opc 00:17:11.750 11:08:19 -- accel/accel.sh@17 -- # local accel_module 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # IFS=: 00:17:11.750 11:08:19 -- accel/accel.sh@19 -- # read -r var val 00:17:11.750 11:08:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:11.750 11:08:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:11.750 11:08:19 -- accel/accel.sh@12 -- # build_accel_config 00:17:11.750 11:08:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:11.750 11:08:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:11.750 11:08:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:11.750 11:08:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:11.750 11:08:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:11.750 11:08:19 -- accel/accel.sh@40 -- # local IFS=, 00:17:11.750 11:08:19 -- accel/accel.sh@41 -- # jq -r . 00:17:11.750 [2024-04-18 11:08:19.939198] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
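For reference, the accel_decomp_full_mcore case starting here reduces to a single accel_perf invocation. A minimal sketch under the paths of this run; the empty '{}' is only a stand-in for the JSON config the harness actually pipes over /dev/fd/62, and the flag readings are taken from the values echoed in the trace above:

bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
# -t 1: run for 1 second ('1 seconds' echoed above)   -w decompress: workload under test
# -l: compressed input file                            -y: verify the decompressed output
# -o 0: use the whole file (the '111250 bytes' echoed above, vs. the 4096-byte default)
# -m 0xf: core mask, matching 'Total cores available: 4' and the reactors on cores 0-3
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') \
  -t 1 -w decompress -l "$bib" -y -o 0 -m 0xf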
00:17:11.750 [2024-04-18 11:08:19.939360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65231 ] 00:17:12.009 [2024-04-18 11:08:20.108226] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.267 [2024-04-18 11:08:20.414835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.267 [2024-04-18 11:08:20.415368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.267 [2024-04-18 11:08:20.415216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.267 [2024-04-18 11:08:20.415280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val= 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val= 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val= 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val=0xf 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val= 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val= 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val=decompress 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val= 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val=software 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@22 -- # accel_module=software 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 
00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val=32 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val=32 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val=1 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val=Yes 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val= 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:12.526 11:08:20 -- accel/accel.sh@20 -- # val= 00:17:12.526 11:08:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # IFS=: 00:17:12.526 11:08:20 -- accel/accel.sh@19 -- # read -r var val 00:17:14.427 11:08:22 -- accel/accel.sh@20 -- # val= 00:17:14.427 11:08:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # IFS=: 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # read -r var val 00:17:14.427 11:08:22 -- accel/accel.sh@20 -- # val= 00:17:14.427 11:08:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # IFS=: 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # read -r var val 00:17:14.427 11:08:22 -- accel/accel.sh@20 -- # val= 00:17:14.427 11:08:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # IFS=: 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # read -r var val 00:17:14.427 11:08:22 -- accel/accel.sh@20 -- # val= 00:17:14.427 11:08:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # IFS=: 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # read -r var val 00:17:14.427 11:08:22 -- accel/accel.sh@20 -- # val= 00:17:14.427 11:08:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # IFS=: 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # read -r var val 00:17:14.427 11:08:22 -- accel/accel.sh@20 -- # val= 00:17:14.427 11:08:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # IFS=: 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # read -r var val 00:17:14.427 11:08:22 -- accel/accel.sh@20 -- # val= 00:17:14.427 11:08:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # IFS=: 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # read -r var val 00:17:14.427 11:08:22 -- accel/accel.sh@20 -- # val= 00:17:14.427 11:08:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # IFS=: 00:17:14.427 11:08:22 -- 
accel/accel.sh@19 -- # read -r var val 00:17:14.427 11:08:22 -- accel/accel.sh@20 -- # val= 00:17:14.427 11:08:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # IFS=: 00:17:14.427 11:08:22 -- accel/accel.sh@19 -- # read -r var val 00:17:14.427 11:08:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:14.428 11:08:22 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:14.428 11:08:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:14.428 00:17:14.428 real 0m2.720s 00:17:14.428 user 0m7.732s 00:17:14.428 sys 0m0.238s 00:17:14.428 ************************************ 00:17:14.428 END TEST accel_decomp_full_mcore 00:17:14.428 ************************************ 00:17:14.428 11:08:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:14.428 11:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:14.685 11:08:22 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:14.685 11:08:22 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:17:14.685 11:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:14.685 11:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:14.685 ************************************ 00:17:14.685 START TEST accel_decomp_mthread 00:17:14.685 ************************************ 00:17:14.685 11:08:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:14.685 11:08:22 -- accel/accel.sh@16 -- # local accel_opc 00:17:14.685 11:08:22 -- accel/accel.sh@17 -- # local accel_module 00:17:14.685 11:08:22 -- accel/accel.sh@19 -- # IFS=: 00:17:14.685 11:08:22 -- accel/accel.sh@19 -- # read -r var val 00:17:14.685 11:08:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:14.685 11:08:22 -- accel/accel.sh@12 -- # build_accel_config 00:17:14.685 11:08:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:14.685 11:08:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:14.685 11:08:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:14.685 11:08:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:14.685 11:08:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:14.685 11:08:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:14.685 11:08:22 -- accel/accel.sh@40 -- # local IFS=, 00:17:14.685 11:08:22 -- accel/accel.sh@41 -- # jq -r . 00:17:14.685 [2024-04-18 11:08:22.772674] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
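Every case in this section is driven through the run_test helper from common/autotest_common.sh; the START/END banners and the real/user/sys lines above are its visible effect. A simplified, hypothetical sketch of that pattern (the real helper also toggles xtrace and does failure bookkeeping), shown with the accel_decomp_mthread command that starts here:

run_test_sketch() {          # simplified stand-in for run_test, not the real helper
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                # produces the real/user/sys lines seen in this log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# accel_test is the accel.sh wrapper around accel_perf; -T 2 appears to request
# two worker threads on the single core reported above
run_test_sketch accel_decomp_mthread accel_test -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2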
00:17:14.685 [2024-04-18 11:08:22.772848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65286 ] 00:17:14.943 [2024-04-18 11:08:22.941787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.201 [2024-04-18 11:08:23.227774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val= 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val= 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val= 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val=0x1 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val= 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val= 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val=decompress 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val= 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val=software 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@22 -- # accel_module=software 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val=32 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- 
accel/accel.sh@20 -- # val=32 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val=2 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:15.459 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.459 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.459 11:08:23 -- accel/accel.sh@20 -- # val=Yes 00:17:15.460 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.460 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.460 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.460 11:08:23 -- accel/accel.sh@20 -- # val= 00:17:15.460 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.460 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.460 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:15.460 11:08:23 -- accel/accel.sh@20 -- # val= 00:17:15.460 11:08:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.460 11:08:23 -- accel/accel.sh@19 -- # IFS=: 00:17:15.460 11:08:23 -- accel/accel.sh@19 -- # read -r var val 00:17:17.361 11:08:25 -- accel/accel.sh@20 -- # val= 00:17:17.361 11:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # IFS=: 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # read -r var val 00:17:17.361 11:08:25 -- accel/accel.sh@20 -- # val= 00:17:17.361 11:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # IFS=: 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # read -r var val 00:17:17.361 11:08:25 -- accel/accel.sh@20 -- # val= 00:17:17.361 11:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # IFS=: 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # read -r var val 00:17:17.361 11:08:25 -- accel/accel.sh@20 -- # val= 00:17:17.361 11:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # IFS=: 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # read -r var val 00:17:17.361 11:08:25 -- accel/accel.sh@20 -- # val= 00:17:17.361 11:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # IFS=: 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # read -r var val 00:17:17.361 11:08:25 -- accel/accel.sh@20 -- # val= 00:17:17.361 11:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # IFS=: 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # read -r var val 00:17:17.361 11:08:25 -- accel/accel.sh@20 -- # val= 00:17:17.361 11:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # IFS=: 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # read -r var val 00:17:17.361 11:08:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:17.361 11:08:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:17.361 11:08:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:17.361 00:17:17.361 real 0m2.593s 00:17:17.361 user 0m2.286s 00:17:17.361 sys 0m0.205s 00:17:17.361 11:08:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:17.361 11:08:25 -- common/autotest_common.sh@10 -- # set +x 00:17:17.361 ************************************ 00:17:17.361 END 
TEST accel_decomp_mthread 00:17:17.361 ************************************ 00:17:17.361 11:08:25 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:17.361 11:08:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:17:17.361 11:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.361 11:08:25 -- common/autotest_common.sh@10 -- # set +x 00:17:17.361 ************************************ 00:17:17.361 START TEST accel_deomp_full_mthread 00:17:17.361 ************************************ 00:17:17.361 11:08:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:17.361 11:08:25 -- accel/accel.sh@16 -- # local accel_opc 00:17:17.361 11:08:25 -- accel/accel.sh@17 -- # local accel_module 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # IFS=: 00:17:17.361 11:08:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:17.361 11:08:25 -- accel/accel.sh@19 -- # read -r var val 00:17:17.361 11:08:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:17.362 11:08:25 -- accel/accel.sh@12 -- # build_accel_config 00:17:17.362 11:08:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:17.362 11:08:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:17.362 11:08:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:17.362 11:08:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:17.362 11:08:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:17.362 11:08:25 -- accel/accel.sh@40 -- # local IFS=, 00:17:17.362 11:08:25 -- accel/accel.sh@41 -- # jq -r . 00:17:17.362 [2024-04-18 11:08:25.472369] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
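The long runs of 'IFS=:', 'read -r var val' and 'case "$var" in' lines that dominate these cases are the harness parsing the configuration that accel_perf echoes back, recording which opcode ran and which module serviced it; the closing '[[ -n software ]]' / '[[ -n decompress ]]' checks then assert on the result. A stripped-down sketch of that loop, with the key patterns and the input file name guessed purely for illustration:

# "*opcode*" / "*module*" patterns and the input file are illustrative only;
# the real accel.sh matches its own key names from the echoed config
while IFS=: read -r var val; do
    case "$var" in
        *opcode*) accel_opc=$(echo "$val" | xargs) ;;    # e.g. decompress
        *module*) accel_module=$(echo "$val" | xargs) ;; # e.g. software
    esac
done < config_echoed_by_accel_perf.txt                   # placeholder input
[[ -n $accel_module && -n $accel_opc && $accel_module == software ]]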
00:17:17.362 [2024-04-18 11:08:25.472529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65336 ] 00:17:17.620 [2024-04-18 11:08:25.634349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.878 [2024-04-18 11:08:25.891289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val= 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val= 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val= 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val=0x1 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val= 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val= 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val=decompress 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val= 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val=software 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@22 -- # accel_module=software 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val=32 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- 
accel/accel.sh@20 -- # val=32 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val=2 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val=Yes 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val= 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:18.137 11:08:26 -- accel/accel.sh@20 -- # val= 00:17:18.137 11:08:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # IFS=: 00:17:18.137 11:08:26 -- accel/accel.sh@19 -- # read -r var val 00:17:20.038 11:08:28 -- accel/accel.sh@20 -- # val= 00:17:20.038 11:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # IFS=: 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # read -r var val 00:17:20.038 11:08:28 -- accel/accel.sh@20 -- # val= 00:17:20.038 11:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # IFS=: 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # read -r var val 00:17:20.038 11:08:28 -- accel/accel.sh@20 -- # val= 00:17:20.038 11:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # IFS=: 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # read -r var val 00:17:20.038 11:08:28 -- accel/accel.sh@20 -- # val= 00:17:20.038 11:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # IFS=: 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # read -r var val 00:17:20.038 11:08:28 -- accel/accel.sh@20 -- # val= 00:17:20.038 11:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # IFS=: 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # read -r var val 00:17:20.038 11:08:28 -- accel/accel.sh@20 -- # val= 00:17:20.038 11:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # IFS=: 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # read -r var val 00:17:20.038 11:08:28 -- accel/accel.sh@20 -- # val= 00:17:20.038 11:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # IFS=: 00:17:20.038 11:08:28 -- accel/accel.sh@19 -- # read -r var val 00:17:20.038 11:08:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:20.038 11:08:28 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:20.038 11:08:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:20.038 00:17:20.038 real 0m2.633s 00:17:20.038 user 0m2.349s 00:17:20.038 sys 0m0.189s 00:17:20.038 ************************************ 00:17:20.038 END TEST accel_deomp_full_mthread 00:17:20.038 ************************************ 00:17:20.038 11:08:28 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:17:20.038 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:17:20.038 11:08:28 -- accel/accel.sh@124 -- # [[ n == y ]] 00:17:20.038 11:08:28 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:17:20.038 11:08:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:20.038 11:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:20.038 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:17:20.038 11:08:28 -- accel/accel.sh@137 -- # build_accel_config 00:17:20.038 11:08:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:20.039 11:08:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:20.039 11:08:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:20.039 11:08:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:20.039 11:08:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:20.039 11:08:28 -- accel/accel.sh@40 -- # local IFS=, 00:17:20.039 11:08:28 -- accel/accel.sh@41 -- # jq -r . 00:17:20.039 ************************************ 00:17:20.039 START TEST accel_dif_functional_tests 00:17:20.039 ************************************ 00:17:20.039 11:08:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:17:20.039 [2024-04-18 11:08:28.254919] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:20.039 [2024-04-18 11:08:28.255059] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65393 ] 00:17:20.297 [2024-04-18 11:08:28.423279] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:20.555 [2024-04-18 11:08:28.707458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.555 [2024-04-18 11:08:28.707528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.555 [2024-04-18 11:08:28.707547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.814 00:17:20.814 00:17:20.814 CUnit - A unit testing framework for C - Version 2.1-3 00:17:20.814 http://cunit.sourceforge.net/ 00:17:20.814 00:17:20.814 00:17:20.814 Suite: accel_dif 00:17:20.814 Test: verify: DIF generated, GUARD check ...passed 00:17:20.814 Test: verify: DIF generated, APPTAG check ...passed 00:17:20.814 Test: verify: DIF generated, REFTAG check ...passed 00:17:20.814 Test: verify: DIF not generated, GUARD check ...passed 00:17:20.814 Test: verify: DIF not generated, APPTAG check ...[2024-04-18 11:08:29.026323] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:17:20.814 [2024-04-18 11:08:29.026420] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:17:20.814 passed 00:17:20.814 Test: verify: DIF not generated, REFTAG check ...[2024-04-18 11:08:29.026484] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:17:20.814 [2024-04-18 11:08:29.026672] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:17:20.814 [2024-04-18 11:08:29.026730] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:17:20.814 passed 00:17:20.814 Test: verify: APPTAG correct, APPTAG check ...[2024-04-18 11:08:29.026839] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:17:20.814 passed 00:17:20.814 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:17:20.814 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-04-18 11:08:29.027112] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:17:20.814 passed 00:17:20.814 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:17:20.814 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:17:20.814 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:17:20.814 Test: generate copy: DIF generated, GUARD check ...[2024-04-18 11:08:29.027608] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:17:20.814 passed 00:17:20.814 Test: generate copy: DIF generated, APTTAG check ...passed 00:17:20.814 Test: generate copy: DIF generated, REFTAG check ...passed 00:17:20.814 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:17:20.814 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:17:20.814 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:17:20.814 Test: generate copy: iovecs-len validate ...[2024-04-18 11:08:29.028299] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:17:20.814 passed 00:17:20.814 Test: generate copy: buffer alignment validate ...passed 00:17:20.814 00:17:20.814 Run Summary: Type Total Ran Passed Failed Inactive 00:17:20.814 suites 1 1 n/a 0 0 00:17:20.814 tests 20 20 20 0 0 00:17:20.814 asserts 204 204 204 0 n/a 00:17:20.814 00:17:20.814 Elapsed time = 0.006 seconds 00:17:22.189 00:17:22.189 real 0m2.034s 00:17:22.189 user 0m3.820s 00:17:22.189 sys 0m0.250s 00:17:22.189 11:08:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:22.189 11:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:22.189 ************************************ 00:17:22.189 END TEST accel_dif_functional_tests 00:17:22.189 ************************************ 00:17:22.189 00:17:22.189 real 1m6.283s 00:17:22.189 user 1m9.395s 00:17:22.189 sys 0m7.593s 00:17:22.189 11:08:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:22.189 11:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:22.189 ************************************ 00:17:22.189 END TEST accel 00:17:22.189 ************************************ 00:17:22.189 11:08:30 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:17:22.189 11:08:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:22.189 11:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:22.189 11:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:22.189 ************************************ 00:17:22.189 START TEST accel_rpc 00:17:22.189 ************************************ 00:17:22.189 11:08:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:17:22.448 * Looking for test storage... 
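The *ERROR* lines inside accel_dif_functional_tests above are expected output: the negative cases ('verify: DIF not generated, ...') deliberately corrupt the Guard, App Tag or Ref Tag and check that _dif_verify rejects them, so CUnit still reports every test as passed (20/20 in the Run Summary). The suite is launched the same way as the perf cases; a sketch, again with '{}' standing in for the harness-built JSON config:

# the dif functional test binary also takes its accel config over an inherited fd,
# mirroring the '-c /dev/fd/62' in the run_test line above
/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(echo '{}')
# exit status 0 corresponds to all 20 CUnit tests passing, as in the Run Summary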
00:17:22.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:17:22.448 11:08:30 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:22.448 11:08:30 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65481 00:17:22.448 11:08:30 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:22.448 11:08:30 -- accel/accel_rpc.sh@15 -- # waitforlisten 65481 00:17:22.448 11:08:30 -- common/autotest_common.sh@817 -- # '[' -z 65481 ']' 00:17:22.448 11:08:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.448 11:08:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:22.448 11:08:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.448 11:08:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:22.448 11:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:22.448 [2024-04-18 11:08:30.551466] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:22.448 [2024-04-18 11:08:30.551643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65481 ] 00:17:22.706 [2024-04-18 11:08:30.747254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.964 [2024-04-18 11:08:31.064410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.531 11:08:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:23.531 11:08:31 -- common/autotest_common.sh@850 -- # return 0 00:17:23.531 11:08:31 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:17:23.531 11:08:31 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:17:23.531 11:08:31 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:17:23.531 11:08:31 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:17:23.531 11:08:31 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:17:23.531 11:08:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:23.531 11:08:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:23.531 11:08:31 -- common/autotest_common.sh@10 -- # set +x 00:17:23.531 ************************************ 00:17:23.531 START TEST accel_assign_opcode 00:17:23.531 ************************************ 00:17:23.531 11:08:31 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:17:23.531 11:08:31 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:17:23.532 11:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.532 11:08:31 -- common/autotest_common.sh@10 -- # set +x 00:17:23.532 [2024-04-18 11:08:31.669751] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:17:23.532 11:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.532 11:08:31 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:17:23.532 11:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.532 11:08:31 -- common/autotest_common.sh@10 -- # set +x 00:17:23.532 [2024-04-18 11:08:31.677657] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:17:23.532 11:08:31 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.532 11:08:31 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:17:23.532 11:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.532 11:08:31 -- common/autotest_common.sh@10 -- # set +x 00:17:24.469 11:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.469 11:08:32 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:17:24.469 11:08:32 -- accel/accel_rpc.sh@42 -- # grep software 00:17:24.469 11:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.469 11:08:32 -- common/autotest_common.sh@10 -- # set +x 00:17:24.469 11:08:32 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:17:24.469 11:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.469 software 00:17:24.469 00:17:24.469 real 0m0.927s 00:17:24.469 user 0m0.051s 00:17:24.469 sys 0m0.017s 00:17:24.469 11:08:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.469 11:08:32 -- common/autotest_common.sh@10 -- # set +x 00:17:24.469 ************************************ 00:17:24.469 END TEST accel_assign_opcode 00:17:24.469 ************************************ 00:17:24.469 11:08:32 -- accel/accel_rpc.sh@55 -- # killprocess 65481 00:17:24.469 11:08:32 -- common/autotest_common.sh@936 -- # '[' -z 65481 ']' 00:17:24.469 11:08:32 -- common/autotest_common.sh@940 -- # kill -0 65481 00:17:24.469 11:08:32 -- common/autotest_common.sh@941 -- # uname 00:17:24.469 11:08:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.469 11:08:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65481 00:17:24.469 11:08:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:24.469 11:08:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:24.469 11:08:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65481' 00:17:24.469 killing process with pid 65481 00:17:24.469 11:08:32 -- common/autotest_common.sh@955 -- # kill 65481 00:17:24.469 11:08:32 -- common/autotest_common.sh@960 -- # wait 65481 00:17:27.000 00:17:27.000 real 0m4.702s 00:17:27.000 user 0m4.636s 00:17:27.000 sys 0m0.753s 00:17:27.000 11:08:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:27.000 11:08:35 -- common/autotest_common.sh@10 -- # set +x 00:17:27.000 ************************************ 00:17:27.000 END TEST accel_rpc 00:17:27.000 ************************************ 00:17:27.000 11:08:35 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:27.000 11:08:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:27.000 11:08:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:27.000 11:08:35 -- common/autotest_common.sh@10 -- # set +x 00:17:27.000 ************************************ 00:17:27.000 START TEST app_cmdline 00:17:27.000 ************************************ 00:17:27.000 11:08:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:27.258 * Looking for test storage... 
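The accel_assign_opcode case above exercises the opcode-to-module override over JSON-RPC while the target is still paused by --wait-for-rpc. Condensed into the commands visible in this log (the harness waits for the RPC socket via waitforlisten before issuing any call):

# start the target paused so opcode assignments can be changed before framework init
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
# an unknown module is accepted at assignment time ("assigned to module incorrect" above)...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m incorrect
# ...then overridden with a real module before the framework initializes
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
# after init the assignment shows up in the opcode table
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # -> software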
00:17:27.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:27.258 11:08:35 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:17:27.258 11:08:35 -- app/cmdline.sh@17 -- # spdk_tgt_pid=65631 00:17:27.258 11:08:35 -- app/cmdline.sh@18 -- # waitforlisten 65631 00:17:27.258 11:08:35 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:17:27.258 11:08:35 -- common/autotest_common.sh@817 -- # '[' -z 65631 ']' 00:17:27.258 11:08:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.258 11:08:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:27.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.258 11:08:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.258 11:08:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:27.258 11:08:35 -- common/autotest_common.sh@10 -- # set +x 00:17:27.258 [2024-04-18 11:08:35.427786] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:27.258 [2024-04-18 11:08:35.427964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65631 ] 00:17:27.516 [2024-04-18 11:08:35.604083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.774 [2024-04-18 11:08:35.909768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.709 11:08:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:28.709 11:08:36 -- common/autotest_common.sh@850 -- # return 0 00:17:28.709 11:08:36 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:17:28.967 { 00:17:28.967 "fields": { 00:17:28.967 "commit": "65b4e17c6", 00:17:28.967 "major": 24, 00:17:28.967 "minor": 5, 00:17:28.967 "patch": 0, 00:17:28.967 "suffix": "-pre" 00:17:28.967 }, 00:17:28.967 "version": "SPDK v24.05-pre git sha1 65b4e17c6" 00:17:28.967 } 00:17:28.967 11:08:37 -- app/cmdline.sh@22 -- # expected_methods=() 00:17:28.967 11:08:37 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:28.967 11:08:37 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:28.967 11:08:37 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:28.967 11:08:37 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:28.967 11:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.967 11:08:37 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:28.967 11:08:37 -- common/autotest_common.sh@10 -- # set +x 00:17:28.967 11:08:37 -- app/cmdline.sh@26 -- # sort 00:17:28.967 11:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.224 11:08:37 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:29.224 11:08:37 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:29.224 11:08:37 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:29.224 11:08:37 -- common/autotest_common.sh@638 -- # local es=0 00:17:29.224 11:08:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:29.224 11:08:37 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:29.224 11:08:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.224 11:08:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:29.224 11:08:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.224 11:08:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:29.224 11:08:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.224 11:08:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:29.224 11:08:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:29.224 11:08:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:29.482 2024/04/18 11:08:37 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:17:29.482 request: 00:17:29.482 { 00:17:29.482 "method": "env_dpdk_get_mem_stats", 00:17:29.482 "params": {} 00:17:29.482 } 00:17:29.483 Got JSON-RPC error response 00:17:29.483 GoRPCClient: error on JSON-RPC call 00:17:29.483 11:08:37 -- common/autotest_common.sh@641 -- # es=1 00:17:29.483 11:08:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:29.483 11:08:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:29.483 11:08:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:29.483 11:08:37 -- app/cmdline.sh@1 -- # killprocess 65631 00:17:29.483 11:08:37 -- common/autotest_common.sh@936 -- # '[' -z 65631 ']' 00:17:29.483 11:08:37 -- common/autotest_common.sh@940 -- # kill -0 65631 00:17:29.483 11:08:37 -- common/autotest_common.sh@941 -- # uname 00:17:29.483 11:08:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:29.483 11:08:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65631 00:17:29.483 11:08:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:29.483 killing process with pid 65631 00:17:29.483 11:08:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:29.483 11:08:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65631' 00:17:29.483 11:08:37 -- common/autotest_common.sh@955 -- # kill 65631 00:17:29.483 11:08:37 -- common/autotest_common.sh@960 -- # wait 65631 00:17:32.012 00:17:32.012 real 0m4.819s 00:17:32.012 user 0m5.177s 00:17:32.012 sys 0m0.780s 00:17:32.012 11:08:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:32.012 ************************************ 00:17:32.012 END TEST app_cmdline 00:17:32.012 ************************************ 00:17:32.012 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:17:32.012 11:08:40 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:32.012 11:08:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:32.012 11:08:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.012 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:17:32.012 ************************************ 00:17:32.012 START TEST version 00:17:32.012 ************************************ 00:17:32.012 11:08:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:32.012 * Looking for test storage... 
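app_cmdline starts the target with an RPC allowlist, so only the two whitelisted methods respond; anything else gets the JSON-RPC 'Method not found' (Code=-32601) error captured above. The flow, condensed (target started and awaited as in the accel_rpc section):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
# allowed: returns the version object ("commit": "65b4e17c6", "version": "SPDK v24.05-pre ...")
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
# allowed: under the allowlist this reports exactly the two permitted methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
# not on the allowlist: fails with Code=-32601 Msg=Method not found, as logged above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats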
00:17:32.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:32.012 11:08:40 -- app/version.sh@17 -- # get_header_version major 00:17:32.012 11:08:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:32.012 11:08:40 -- app/version.sh@14 -- # cut -f2 00:17:32.012 11:08:40 -- app/version.sh@14 -- # tr -d '"' 00:17:32.012 11:08:40 -- app/version.sh@17 -- # major=24 00:17:32.012 11:08:40 -- app/version.sh@18 -- # get_header_version minor 00:17:32.012 11:08:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:32.012 11:08:40 -- app/version.sh@14 -- # cut -f2 00:17:32.012 11:08:40 -- app/version.sh@14 -- # tr -d '"' 00:17:32.012 11:08:40 -- app/version.sh@18 -- # minor=5 00:17:32.012 11:08:40 -- app/version.sh@19 -- # get_header_version patch 00:17:32.012 11:08:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:32.012 11:08:40 -- app/version.sh@14 -- # cut -f2 00:17:32.012 11:08:40 -- app/version.sh@14 -- # tr -d '"' 00:17:32.012 11:08:40 -- app/version.sh@19 -- # patch=0 00:17:32.012 11:08:40 -- app/version.sh@20 -- # get_header_version suffix 00:17:32.012 11:08:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:32.012 11:08:40 -- app/version.sh@14 -- # cut -f2 00:17:32.012 11:08:40 -- app/version.sh@14 -- # tr -d '"' 00:17:32.270 11:08:40 -- app/version.sh@20 -- # suffix=-pre 00:17:32.270 11:08:40 -- app/version.sh@22 -- # version=24.5 00:17:32.270 11:08:40 -- app/version.sh@25 -- # (( patch != 0 )) 00:17:32.270 11:08:40 -- app/version.sh@28 -- # version=24.5rc0 00:17:32.270 11:08:40 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:32.270 11:08:40 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:17:32.270 11:08:40 -- app/version.sh@30 -- # py_version=24.5rc0 00:17:32.270 11:08:40 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:17:32.270 00:17:32.270 real 0m0.153s 00:17:32.270 user 0m0.096s 00:17:32.270 sys 0m0.093s 00:17:32.270 11:08:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:32.270 ************************************ 00:17:32.270 END TEST version 00:17:32.270 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:17:32.270 ************************************ 00:17:32.270 11:08:40 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:17:32.270 11:08:40 -- spdk/autotest.sh@194 -- # uname -s 00:17:32.270 11:08:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:32.270 11:08:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:32.270 11:08:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:32.270 11:08:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:32.270 11:08:40 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:17:32.270 11:08:40 -- spdk/autotest.sh@258 -- # timing_exit lib 00:17:32.270 11:08:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:32.270 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:17:32.270 11:08:40 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:17:32.270 11:08:40 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:17:32.270 11:08:40 -- 
spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:17:32.270 11:08:40 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:17:32.270 11:08:40 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:17:32.270 11:08:40 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:17:32.270 11:08:40 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:17:32.270 11:08:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:32.270 11:08:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.270 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:17:32.270 ************************************ 00:17:32.270 START TEST nvmf_tcp 00:17:32.270 ************************************ 00:17:32.270 11:08:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:17:32.529 * Looking for test storage... 00:17:32.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:32.529 11:08:40 -- nvmf/nvmf.sh@10 -- # uname -s 00:17:32.529 11:08:40 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:17:32.529 11:08:40 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:32.529 11:08:40 -- nvmf/common.sh@7 -- # uname -s 00:17:32.529 11:08:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.529 11:08:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.529 11:08:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.529 11:08:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.529 11:08:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.529 11:08:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.529 11:08:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.529 11:08:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.529 11:08:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.529 11:08:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.529 11:08:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:17:32.529 11:08:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:17:32.529 11:08:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.529 11:08:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.529 11:08:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:32.529 11:08:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.529 11:08:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:32.529 11:08:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.529 11:08:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.529 11:08:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.529 11:08:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.529 11:08:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.529 11:08:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.529 11:08:40 -- paths/export.sh@5 -- # export PATH 00:17:32.529 11:08:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.529 11:08:40 -- nvmf/common.sh@47 -- # : 0 00:17:32.529 11:08:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.529 11:08:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.529 11:08:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.529 11:08:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.529 11:08:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.529 11:08:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.529 11:08:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.529 11:08:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.529 11:08:40 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:32.529 11:08:40 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:17:32.529 11:08:40 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:17:32.529 11:08:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:32.529 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:17:32.529 11:08:40 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:17:32.529 11:08:40 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:17:32.529 11:08:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:32.529 11:08:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.529 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:17:32.529 ************************************ 00:17:32.529 START TEST nvmf_example 00:17:32.529 ************************************ 00:17:32.529 11:08:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:17:32.529 * Looking for test storage... 
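nvmf/common.sh, sourced above by nvmf.sh and again by nvmf_example.sh, pins the TCP test endpoint and generates a fresh host NQN for the run; the values visible in this log come down to the following few assignments (a sketch of the effect, not the full script):

NVMF_PORT=4420                    # primary listener port (4421/4422 for the second/third)
NVMF_TCP_IP_ADDRESS=127.0.0.1     # NET_TYPE=virt: virtual (veth/namespace) networking, no physical NICs
NVME_HOSTNQN=$(nvme gen-hostnqn)  # e.g. nqn.2014-08.org.nvmexpress:uuid:27b29bba-... as above
NVME_HOSTID=${NVME_HOSTNQN##*:}   # the uuid suffix, reused as --hostid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")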
00:17:32.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:32.529 11:08:40 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:32.529 11:08:40 -- nvmf/common.sh@7 -- # uname -s 00:17:32.529 11:08:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.529 11:08:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.529 11:08:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.529 11:08:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.529 11:08:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.529 11:08:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.529 11:08:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.529 11:08:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.529 11:08:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.529 11:08:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.529 11:08:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:17:32.529 11:08:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:17:32.529 11:08:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.529 11:08:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.529 11:08:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:32.529 11:08:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.529 11:08:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:32.529 11:08:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.529 11:08:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.529 11:08:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.529 11:08:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.529 11:08:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.529 11:08:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.529 11:08:40 -- paths/export.sh@5 -- # export PATH 00:17:32.529 11:08:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.529 11:08:40 -- nvmf/common.sh@47 -- # : 0 00:17:32.529 11:08:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.529 11:08:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.529 11:08:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.529 11:08:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.529 11:08:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.529 11:08:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.529 11:08:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.529 11:08:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.529 11:08:40 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:17:32.529 11:08:40 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:17:32.529 11:08:40 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:17:32.529 11:08:40 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:17:32.529 11:08:40 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:17:32.529 11:08:40 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:17:32.529 11:08:40 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:17:32.529 11:08:40 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:17:32.529 11:08:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:32.529 11:08:40 -- common/autotest_common.sh@10 -- # set +x 00:17:32.529 11:08:40 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:17:32.530 11:08:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:32.530 11:08:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.530 11:08:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:32.530 11:08:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:32.530 11:08:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:32.530 11:08:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.530 11:08:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.530 11:08:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.530 11:08:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:32.530 11:08:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:32.530 11:08:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:32.530 11:08:40 -- nvmf/common.sh@415 -- # [[ 
virt == phy-fallback ]] 00:17:32.530 11:08:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:32.530 11:08:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:32.530 11:08:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.530 11:08:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.530 11:08:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:32.530 11:08:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:32.530 11:08:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:32.530 11:08:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:32.530 11:08:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:32.530 11:08:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.530 11:08:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:32.530 11:08:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:32.530 11:08:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:32.530 11:08:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:32.530 11:08:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:32.788 Cannot find device "nvmf_init_br" 00:17:32.788 11:08:40 -- nvmf/common.sh@154 -- # true 00:17:32.788 11:08:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:32.788 Cannot find device "nvmf_tgt_br" 00:17:32.788 11:08:40 -- nvmf/common.sh@155 -- # true 00:17:32.788 11:08:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:32.788 Cannot find device "nvmf_tgt_br2" 00:17:32.788 11:08:40 -- nvmf/common.sh@156 -- # true 00:17:32.788 11:08:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:32.788 Cannot find device "nvmf_init_br" 00:17:32.788 11:08:40 -- nvmf/common.sh@157 -- # true 00:17:32.788 11:08:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:32.788 Cannot find device "nvmf_tgt_br" 00:17:32.788 11:08:40 -- nvmf/common.sh@158 -- # true 00:17:32.788 11:08:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:32.788 Cannot find device "nvmf_tgt_br2" 00:17:32.788 11:08:40 -- nvmf/common.sh@159 -- # true 00:17:32.788 11:08:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:32.788 Cannot find device "nvmf_br" 00:17:32.788 11:08:40 -- nvmf/common.sh@160 -- # true 00:17:32.788 11:08:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:32.788 Cannot find device "nvmf_init_if" 00:17:32.788 11:08:40 -- nvmf/common.sh@161 -- # true 00:17:32.788 11:08:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.788 11:08:40 -- nvmf/common.sh@162 -- # true 00:17:32.788 11:08:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.788 11:08:40 -- nvmf/common.sh@163 -- # true 00:17:32.788 11:08:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:32.788 11:08:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:32.788 11:08:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:32.788 11:08:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:32.788 11:08:40 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:32.788 11:08:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:32.788 11:08:40 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:32.788 11:08:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:32.788 11:08:40 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:32.788 11:08:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:32.788 11:08:40 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:32.788 11:08:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:32.788 11:08:40 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:32.788 11:08:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:32.788 11:08:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:32.788 11:08:40 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:32.788 11:08:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:33.047 11:08:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:33.047 11:08:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:33.047 11:08:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:33.047 11:08:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:33.047 11:08:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:33.047 11:08:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:33.047 11:08:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:33.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:17:33.047 00:17:33.047 --- 10.0.0.2 ping statistics --- 00:17:33.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.047 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:33.047 11:08:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:33.047 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:33.047 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:17:33.047 00:17:33.047 --- 10.0.0.3 ping statistics --- 00:17:33.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.047 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:33.047 11:08:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:33.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:33.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:17:33.047 00:17:33.047 --- 10.0.0.1 ping statistics --- 00:17:33.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.047 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:33.047 11:08:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.047 11:08:41 -- nvmf/common.sh@422 -- # return 0 00:17:33.047 11:08:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:33.047 11:08:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.047 11:08:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:33.047 11:08:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:33.047 11:08:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.047 11:08:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:33.047 11:08:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:33.047 11:08:41 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:17:33.047 11:08:41 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:17:33.047 11:08:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:33.047 11:08:41 -- common/autotest_common.sh@10 -- # set +x 00:17:33.047 11:08:41 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:17:33.047 11:08:41 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:17:33.047 11:08:41 -- target/nvmf_example.sh@34 -- # nvmfpid=66037 00:17:33.047 11:08:41 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:17:33.047 11:08:41 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.047 11:08:41 -- target/nvmf_example.sh@36 -- # waitforlisten 66037 00:17:33.047 11:08:41 -- common/autotest_common.sh@817 -- # '[' -z 66037 ']' 00:17:33.047 11:08:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.047 11:08:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:33.047 11:08:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
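[Editor's note] The nvmf_veth_init sequence traced above amounts to a small veth-plus-bridge topology: one veth pair for the initiator side (nvmf_init_if, 10.0.0.1/24, kept in the root namespace) and one for the target side (nvmf_tgt_if, 10.0.0.2/24, moved into the nvmf_tgt_ns_spdk namespace), with the bridge-side peers enslaved to nvmf_br and TCP port 4420 opened. A condensed sketch of that setup follows; the second target interface (nvmf_tgt_if2 with 10.0.0.3) is built the same way and omitted here, and this is an illustration of the steps seen in the trace, not the exact common.sh code.

# create the namespace and the veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# address the initiator end and the target end inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# allow NVMe/TCP traffic on port 4420 and sanity-check connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2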
00:17:33.047 11:08:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:33.047 11:08:41 -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 11:08:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:34.423 11:08:42 -- common/autotest_common.sh@850 -- # return 0 00:17:34.423 11:08:42 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:17:34.423 11:08:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:34.423 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 11:08:42 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.423 11:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.423 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 11:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.423 11:08:42 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:17:34.423 11:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.423 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 11:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.423 11:08:42 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:17:34.423 11:08:42 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:34.423 11:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.423 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 11:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.423 11:08:42 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:17:34.423 11:08:42 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:34.423 11:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.423 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 11:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.423 11:08:42 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.423 11:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.423 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 11:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.423 11:08:42 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:34.423 11:08:42 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:46.624 Initializing NVMe Controllers 00:17:46.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:46.624 Initialization complete. Launching workers. 
00:17:46.624 ======================================================== 00:17:46.624 Latency(us) 00:17:46.624 Device Information : IOPS MiB/s Average min max 00:17:46.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12167.27 47.53 5259.65 1254.50 21117.75 00:17:46.624 ======================================================== 00:17:46.624 Total : 12167.27 47.53 5259.65 1254.50 21117.75 00:17:46.624 00:17:46.624 11:08:52 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:17:46.624 11:08:52 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:17:46.624 11:08:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:46.624 11:08:52 -- nvmf/common.sh@117 -- # sync 00:17:46.624 11:08:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:46.624 11:08:52 -- nvmf/common.sh@120 -- # set +e 00:17:46.624 11:08:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:46.624 11:08:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:46.624 rmmod nvme_tcp 00:17:46.624 rmmod nvme_fabrics 00:17:46.624 rmmod nvme_keyring 00:17:46.624 11:08:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:46.624 11:08:52 -- nvmf/common.sh@124 -- # set -e 00:17:46.624 11:08:52 -- nvmf/common.sh@125 -- # return 0 00:17:46.624 11:08:52 -- nvmf/common.sh@478 -- # '[' -n 66037 ']' 00:17:46.624 11:08:52 -- nvmf/common.sh@479 -- # killprocess 66037 00:17:46.624 11:08:52 -- common/autotest_common.sh@936 -- # '[' -z 66037 ']' 00:17:46.624 11:08:52 -- common/autotest_common.sh@940 -- # kill -0 66037 00:17:46.624 11:08:52 -- common/autotest_common.sh@941 -- # uname 00:17:46.624 11:08:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:46.624 11:08:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66037 00:17:46.624 11:08:52 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:17:46.624 11:08:52 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:17:46.624 killing process with pid 66037 00:17:46.624 11:08:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66037' 00:17:46.624 11:08:52 -- common/autotest_common.sh@955 -- # kill 66037 00:17:46.624 11:08:52 -- common/autotest_common.sh@960 -- # wait 66037 00:17:46.624 nvmf threads initialize successfully 00:17:46.624 bdev subsystem init successfully 00:17:46.624 created a nvmf target service 00:17:46.624 create targets's poll groups done 00:17:46.625 all subsystems of target started 00:17:46.625 nvmf target is running 00:17:46.625 all subsystems of target stopped 00:17:46.625 destroy targets's poll groups done 00:17:46.625 destroyed the nvmf target service 00:17:46.625 bdev subsystem finish successfully 00:17:46.625 nvmf threads destroy successfully 00:17:46.625 11:08:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:46.625 11:08:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:46.625 11:08:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:46.625 11:08:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.625 11:08:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:46.625 11:08:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.625 11:08:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.625 11:08:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.625 11:08:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:46.625 11:08:54 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:17:46.625 11:08:54 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:17:46.625 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:17:46.625 00:17:46.625 real 0m13.658s 00:17:46.625 user 0m48.440s 00:17:46.625 sys 0m2.008s 00:17:46.625 11:08:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:46.625 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:17:46.625 ************************************ 00:17:46.625 END TEST nvmf_example 00:17:46.625 ************************************ 00:17:46.625 11:08:54 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:17:46.625 11:08:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:46.625 11:08:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.625 11:08:54 -- common/autotest_common.sh@10 -- # set +x 00:17:46.625 ************************************ 00:17:46.625 START TEST nvmf_filesystem 00:17:46.625 ************************************ 00:17:46.625 11:08:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:17:46.625 * Looking for test storage... 00:17:46.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:46.625 11:08:54 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:17:46.625 11:08:54 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:17:46.625 11:08:54 -- common/autotest_common.sh@34 -- # set -e 00:17:46.625 11:08:54 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:17:46.625 11:08:54 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:17:46.625 11:08:54 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:17:46.625 11:08:54 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:17:46.625 11:08:54 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:17:46.625 11:08:54 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:17:46.625 11:08:54 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:17:46.625 11:08:54 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:17:46.625 11:08:54 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:17:46.625 11:08:54 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:17:46.625 11:08:54 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:17:46.625 11:08:54 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:17:46.625 11:08:54 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:17:46.625 11:08:54 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:17:46.625 11:08:54 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:17:46.625 11:08:54 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:17:46.625 11:08:54 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:17:46.625 11:08:54 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:17:46.625 11:08:54 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:17:46.625 11:08:54 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:17:46.625 11:08:54 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:17:46.625 11:08:54 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:17:46.625 11:08:54 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:17:46.625 11:08:54 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:46.625 11:08:54 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:17:46.625 11:08:54 -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:17:46.625 11:08:54 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:17:46.625 11:08:54 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:17:46.625 11:08:54 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:17:46.625 11:08:54 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:17:46.625 11:08:54 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:17:46.625 11:08:54 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:17:46.625 11:08:54 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:17:46.625 11:08:54 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:17:46.625 11:08:54 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:17:46.625 11:08:54 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:17:46.625 11:08:54 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:17:46.625 11:08:54 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:17:46.625 11:08:54 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:17:46.625 11:08:54 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:17:46.625 11:08:54 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:17:46.625 11:08:54 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:17:46.625 11:08:54 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:17:46.625 11:08:54 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:17:46.625 11:08:54 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:17:46.625 11:08:54 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:17:46.625 11:08:54 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:17:46.625 11:08:54 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:17:46.625 11:08:54 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:17:46.625 11:08:54 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:17:46.625 11:08:54 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:17:46.625 11:08:54 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:17:46.625 11:08:54 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:17:46.625 11:08:54 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:17:46.625 11:08:54 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:17:46.625 11:08:54 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:17:46.625 11:08:54 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:17:46.625 11:08:54 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:17:46.625 11:08:54 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:17:46.625 11:08:54 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:17:46.625 11:08:54 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:17:46.625 11:08:54 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:17:46.625 11:08:54 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:17:46.625 11:08:54 -- common/build_config.sh@59 -- # CONFIG_GOLANG=y 00:17:46.625 11:08:54 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:17:46.625 11:08:54 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:17:46.625 11:08:54 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:17:46.625 11:08:54 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:17:46.625 11:08:54 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:17:46.625 11:08:54 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:17:46.625 11:08:54 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:17:46.625 11:08:54 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:17:46.625 
11:08:54 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:17:46.625 11:08:54 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:17:46.625 11:08:54 -- common/build_config.sh@70 -- # CONFIG_AVAHI=y 00:17:46.625 11:08:54 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:17:46.625 11:08:54 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:17:46.625 11:08:54 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:17:46.625 11:08:54 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:17:46.625 11:08:54 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:17:46.625 11:08:54 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:17:46.625 11:08:54 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:17:46.626 11:08:54 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:17:46.626 11:08:54 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:17:46.626 11:08:54 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:17:46.626 11:08:54 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:17:46.626 11:08:54 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:17:46.626 11:08:54 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:17:46.626 11:08:54 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:17:46.626 11:08:54 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:17:46.626 11:08:54 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:17:46.626 11:08:54 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:17:46.626 11:08:54 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:17:46.626 11:08:54 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:17:46.626 11:08:54 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:17:46.626 11:08:54 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:17:46.626 11:08:54 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:17:46.626 11:08:54 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:17:46.626 11:08:54 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:17:46.626 11:08:54 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:17:46.626 11:08:54 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:17:46.626 11:08:54 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:17:46.626 11:08:54 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:17:46.626 #define SPDK_CONFIG_H 00:17:46.626 #define SPDK_CONFIG_APPS 1 00:17:46.626 #define SPDK_CONFIG_ARCH native 00:17:46.626 #define SPDK_CONFIG_ASAN 1 00:17:46.626 #define SPDK_CONFIG_AVAHI 1 00:17:46.626 #undef SPDK_CONFIG_CET 00:17:46.626 #define SPDK_CONFIG_COVERAGE 1 00:17:46.626 #define SPDK_CONFIG_CROSS_PREFIX 00:17:46.626 #undef SPDK_CONFIG_CRYPTO 00:17:46.626 #undef SPDK_CONFIG_CRYPTO_MLX5 00:17:46.626 #undef SPDK_CONFIG_CUSTOMOCF 00:17:46.626 #undef SPDK_CONFIG_DAOS 00:17:46.626 #define SPDK_CONFIG_DAOS_DIR 00:17:46.626 #define SPDK_CONFIG_DEBUG 1 00:17:46.626 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:17:46.626 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:17:46.626 #define SPDK_CONFIG_DPDK_INC_DIR 00:17:46.626 #define SPDK_CONFIG_DPDK_LIB_DIR 00:17:46.626 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:17:46.626 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:46.626 #define SPDK_CONFIG_EXAMPLES 1 00:17:46.626 #undef SPDK_CONFIG_FC 00:17:46.626 #define SPDK_CONFIG_FC_PATH 00:17:46.626 #define SPDK_CONFIG_FIO_PLUGIN 1 00:17:46.626 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:17:46.626 #undef SPDK_CONFIG_FUSE 00:17:46.626 #undef SPDK_CONFIG_FUZZER 00:17:46.626 #define SPDK_CONFIG_FUZZER_LIB 00:17:46.626 #define SPDK_CONFIG_GOLANG 1 00:17:46.626 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:17:46.626 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:17:46.626 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:17:46.626 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:17:46.626 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:17:46.626 #undef SPDK_CONFIG_HAVE_LIBBSD 00:17:46.626 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:17:46.626 #define SPDK_CONFIG_IDXD 1 00:17:46.626 #undef SPDK_CONFIG_IDXD_KERNEL 00:17:46.626 #undef SPDK_CONFIG_IPSEC_MB 00:17:46.626 #define SPDK_CONFIG_IPSEC_MB_DIR 00:17:46.626 #define SPDK_CONFIG_ISAL 1 00:17:46.626 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:17:46.626 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:17:46.626 #define SPDK_CONFIG_LIBDIR 00:17:46.626 #undef SPDK_CONFIG_LTO 00:17:46.626 #define SPDK_CONFIG_MAX_LCORES 00:17:46.626 #define SPDK_CONFIG_NVME_CUSE 1 00:17:46.626 #undef SPDK_CONFIG_OCF 00:17:46.626 #define SPDK_CONFIG_OCF_PATH 00:17:46.626 #define SPDK_CONFIG_OPENSSL_PATH 00:17:46.626 #undef SPDK_CONFIG_PGO_CAPTURE 00:17:46.626 #define SPDK_CONFIG_PGO_DIR 00:17:46.626 #undef SPDK_CONFIG_PGO_USE 00:17:46.626 #define SPDK_CONFIG_PREFIX /usr/local 00:17:46.626 #undef SPDK_CONFIG_RAID5F 00:17:46.626 #undef SPDK_CONFIG_RBD 00:17:46.626 #define SPDK_CONFIG_RDMA 1 00:17:46.626 #define SPDK_CONFIG_RDMA_PROV verbs 00:17:46.626 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:17:46.626 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:17:46.626 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:17:46.626 #define SPDK_CONFIG_SHARED 1 00:17:46.626 #undef SPDK_CONFIG_SMA 00:17:46.626 #define SPDK_CONFIG_TESTS 1 00:17:46.626 #undef SPDK_CONFIG_TSAN 00:17:46.626 #define SPDK_CONFIG_UBLK 1 00:17:46.626 #define SPDK_CONFIG_UBSAN 1 00:17:46.626 #undef SPDK_CONFIG_UNIT_TESTS 00:17:46.626 #undef SPDK_CONFIG_URING 00:17:46.626 #define SPDK_CONFIG_URING_PATH 00:17:46.626 #undef SPDK_CONFIG_URING_ZNS 00:17:46.626 #define SPDK_CONFIG_USDT 1 00:17:46.626 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:17:46.626 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:17:46.626 #undef SPDK_CONFIG_VFIO_USER 00:17:46.626 #define SPDK_CONFIG_VFIO_USER_DIR 00:17:46.626 #define SPDK_CONFIG_VHOST 1 00:17:46.626 #define SPDK_CONFIG_VIRTIO 1 00:17:46.626 #undef SPDK_CONFIG_VTUNE 00:17:46.626 #define SPDK_CONFIG_VTUNE_DIR 00:17:46.626 #define SPDK_CONFIG_WERROR 1 00:17:46.626 #define SPDK_CONFIG_WPDK_DIR 00:17:46.626 #undef SPDK_CONFIG_XNVME 00:17:46.626 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:17:46.626 11:08:54 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:17:46.626 11:08:54 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.626 11:08:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.626 11:08:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.626 11:08:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.626 11:08:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.626 11:08:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.626 11:08:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.626 11:08:54 -- paths/export.sh@5 -- # export PATH 00:17:46.626 11:08:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.626 11:08:54 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:17:46.626 11:08:54 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:17:46.626 11:08:54 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:17:46.626 11:08:54 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:17:46.626 11:08:54 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:17:46.626 11:08:54 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:17:46.626 11:08:54 -- pm/common@67 -- # TEST_TAG=N/A 00:17:46.626 11:08:54 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:17:46.626 11:08:54 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:17:46.626 11:08:54 -- pm/common@71 -- # uname -s 00:17:46.626 11:08:54 -- pm/common@71 -- # PM_OS=Linux 00:17:46.626 11:08:54 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:17:46.626 11:08:54 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:17:46.626 11:08:54 -- pm/common@76 -- # [[ Linux == Linux ]] 00:17:46.626 11:08:54 -- pm/common@76 -- # [[ 
QEMU != QEMU ]] 00:17:46.626 11:08:54 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:17:46.626 11:08:54 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:17:46.626 11:08:54 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:17:46.626 11:08:54 -- common/autotest_common.sh@57 -- # : 0 00:17:46.626 11:08:54 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:17:46.626 11:08:54 -- common/autotest_common.sh@61 -- # : 0 00:17:46.626 11:08:54 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:17:46.626 11:08:54 -- common/autotest_common.sh@63 -- # : 0 00:17:46.626 11:08:54 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:17:46.626 11:08:54 -- common/autotest_common.sh@65 -- # : 1 00:17:46.626 11:08:54 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:17:46.627 11:08:54 -- common/autotest_common.sh@67 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:17:46.627 11:08:54 -- common/autotest_common.sh@69 -- # : 00:17:46.627 11:08:54 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:17:46.627 11:08:54 -- common/autotest_common.sh@71 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:17:46.627 11:08:54 -- common/autotest_common.sh@73 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:17:46.627 11:08:54 -- common/autotest_common.sh@75 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:17:46.627 11:08:54 -- common/autotest_common.sh@77 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:17:46.627 11:08:54 -- common/autotest_common.sh@79 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:17:46.627 11:08:54 -- common/autotest_common.sh@81 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:17:46.627 11:08:54 -- common/autotest_common.sh@83 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:17:46.627 11:08:54 -- common/autotest_common.sh@85 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:17:46.627 11:08:54 -- common/autotest_common.sh@87 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:17:46.627 11:08:54 -- common/autotest_common.sh@89 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:17:46.627 11:08:54 -- common/autotest_common.sh@91 -- # : 1 00:17:46.627 11:08:54 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:17:46.627 11:08:54 -- common/autotest_common.sh@93 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:17:46.627 11:08:54 -- common/autotest_common.sh@95 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:17:46.627 11:08:54 -- common/autotest_common.sh@97 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:17:46.627 11:08:54 -- common/autotest_common.sh@99 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:17:46.627 11:08:54 -- common/autotest_common.sh@101 -- # : tcp 00:17:46.627 11:08:54 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:17:46.627 11:08:54 
-- common/autotest_common.sh@103 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:17:46.627 11:08:54 -- common/autotest_common.sh@105 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:17:46.627 11:08:54 -- common/autotest_common.sh@107 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:17:46.627 11:08:54 -- common/autotest_common.sh@109 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:17:46.627 11:08:54 -- common/autotest_common.sh@111 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:17:46.627 11:08:54 -- common/autotest_common.sh@113 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:17:46.627 11:08:54 -- common/autotest_common.sh@115 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:17:46.627 11:08:54 -- common/autotest_common.sh@117 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:17:46.627 11:08:54 -- common/autotest_common.sh@119 -- # : 1 00:17:46.627 11:08:54 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:17:46.627 11:08:54 -- common/autotest_common.sh@121 -- # : 1 00:17:46.627 11:08:54 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:17:46.627 11:08:54 -- common/autotest_common.sh@123 -- # : 00:17:46.627 11:08:54 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:17:46.627 11:08:54 -- common/autotest_common.sh@125 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:17:46.627 11:08:54 -- common/autotest_common.sh@127 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:17:46.627 11:08:54 -- common/autotest_common.sh@129 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:17:46.627 11:08:54 -- common/autotest_common.sh@131 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:17:46.627 11:08:54 -- common/autotest_common.sh@133 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:17:46.627 11:08:54 -- common/autotest_common.sh@135 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:17:46.627 11:08:54 -- common/autotest_common.sh@137 -- # : 00:17:46.627 11:08:54 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:17:46.627 11:08:54 -- common/autotest_common.sh@139 -- # : true 00:17:46.627 11:08:54 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:17:46.627 11:08:54 -- common/autotest_common.sh@141 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:17:46.627 11:08:54 -- common/autotest_common.sh@143 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:17:46.627 11:08:54 -- common/autotest_common.sh@145 -- # : 1 00:17:46.627 11:08:54 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:17:46.627 11:08:54 -- common/autotest_common.sh@147 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:17:46.627 11:08:54 -- common/autotest_common.sh@149 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:17:46.627 
11:08:54 -- common/autotest_common.sh@151 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:17:46.627 11:08:54 -- common/autotest_common.sh@153 -- # : 00:17:46.627 11:08:54 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:17:46.627 11:08:54 -- common/autotest_common.sh@155 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:17:46.627 11:08:54 -- common/autotest_common.sh@157 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:17:46.627 11:08:54 -- common/autotest_common.sh@159 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:17:46.627 11:08:54 -- common/autotest_common.sh@161 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:17:46.627 11:08:54 -- common/autotest_common.sh@163 -- # : 0 00:17:46.627 11:08:54 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:17:46.627 11:08:54 -- common/autotest_common.sh@166 -- # : 00:17:46.627 11:08:54 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:17:46.627 11:08:54 -- common/autotest_common.sh@168 -- # : 1 00:17:46.627 11:08:54 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:17:46.627 11:08:54 -- common/autotest_common.sh@170 -- # : 1 00:17:46.627 11:08:54 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:17:46.627 11:08:54 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:17:46.627 11:08:54 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:17:46.627 11:08:54 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:17:46.627 11:08:54 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:17:46.627 11:08:54 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:46.627 11:08:54 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:46.627 11:08:54 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:46.627 11:08:54 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 
00:17:46.627 11:08:54 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:17:46.627 11:08:54 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:17:46.627 11:08:54 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:46.627 11:08:54 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:46.627 11:08:54 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:17:46.627 11:08:54 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:17:46.627 11:08:54 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:46.627 11:08:54 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:46.628 11:08:54 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:46.628 11:08:54 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:46.628 11:08:54 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:17:46.628 11:08:54 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:17:46.628 11:08:54 -- common/autotest_common.sh@199 -- # cat 00:17:46.628 11:08:54 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:17:46.628 11:08:54 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:46.628 11:08:54 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:46.628 11:08:54 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:46.628 11:08:54 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:46.628 11:08:54 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:17:46.628 11:08:54 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:17:46.628 11:08:54 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:17:46.628 11:08:54 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:17:46.628 11:08:54 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:17:46.628 11:08:54 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:17:46.628 11:08:54 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:46.628 11:08:54 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:46.628 11:08:54 -- common/autotest_common.sh@243 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:46.628 11:08:54 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:46.628 11:08:54 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:17:46.628 11:08:54 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:17:46.628 11:08:54 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:46.628 11:08:54 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:46.628 11:08:54 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:17:46.628 11:08:54 -- common/autotest_common.sh@252 -- # export valgrind= 00:17:46.628 11:08:54 -- common/autotest_common.sh@252 -- # valgrind= 00:17:46.628 11:08:54 -- common/autotest_common.sh@258 -- # uname -s 00:17:46.628 11:08:54 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:17:46.628 11:08:54 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:17:46.628 11:08:54 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:17:46.628 11:08:54 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:17:46.628 11:08:54 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:17:46.628 11:08:54 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:17:46.628 11:08:54 -- common/autotest_common.sh@268 -- # MAKE=make 00:17:46.628 11:08:54 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:17:46.628 11:08:54 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:17:46.628 11:08:54 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:17:46.628 11:08:54 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:17:46.628 11:08:54 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:17:46.628 11:08:54 -- common/autotest_common.sh@289 -- # for i in "$@" 00:17:46.628 11:08:54 -- common/autotest_common.sh@290 -- # case "$i" in 00:17:46.628 11:08:54 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:17:46.628 11:08:54 -- common/autotest_common.sh@307 -- # [[ -z 66300 ]] 00:17:46.628 11:08:54 -- common/autotest_common.sh@307 -- # kill -0 66300 00:17:46.628 11:08:54 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:17:46.628 11:08:54 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:17:46.628 11:08:54 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:17:46.628 11:08:54 -- common/autotest_common.sh@320 -- # local mount target_dir 00:17:46.628 11:08:54 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:17:46.628 11:08:54 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:17:46.628 11:08:54 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:17:46.628 11:08:54 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:17:46.628 11:08:54 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.nwcAP4 00:17:46.628 11:08:54 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:17:46.628 11:08:54 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:17:46.628 11:08:54 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:17:46.628 11:08:54 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.nwcAP4/tests/target /tmp/spdk.nwcAP4 00:17:46.628 11:08:54 -- common/autotest_common.sh@347 -- # 
requested_size=2214592512 00:17:46.628 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.628 11:08:54 -- common/autotest_common.sh@316 -- # df -T 00:17:46.628 11:08:54 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:17:46.628 11:08:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:17:46.628 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=6265278464 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:17:46.628 11:08:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:17:46.628 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:17:46.628 11:08:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:17:46.628 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=13793939456 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:17:46.628 11:08:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=5230948352 00:17:46.628 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=13793939456 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:17:46.628 11:08:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=5230948352 00:17:46.628 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:17:46.628 11:08:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:17:46.628 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.628 11:08:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:17:46.628 11:08:54 -- 
common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:17:46.628 11:08:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:17:46.629 11:08:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:17:46.629 11:08:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:17:46.629 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.629 11:08:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:46.629 11:08:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:46.629 11:08:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267756544 00:17:46.629 11:08:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267895808 00:17:46.629 11:08:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=139264 00:17:46.629 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.629 11:08:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:46.629 11:08:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:46.629 11:08:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:17:46.629 11:08:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:17:46.629 11:08:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:17:46.629 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.629 11:08:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:17:46.629 11:08:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:17:46.629 11:08:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=92908761088 00:17:46.629 11:08:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:17:46.629 11:08:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=6794018816 00:17:46.629 11:08:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:46.629 11:08:54 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:17:46.629 * Looking for test storage... 
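The storage probe traced above (set_test_storage) reduces to roughly the following pattern. This is a condensed sketch, not the verbatim helper: the uses/sizes bookkeeping and the btrfs-specific resize branch are left out, and the paths and numbers shown are the ones from this run.

  requested_size=$((2147483648 + 64 * 1024 * 1024))    # 2 GiB asked for plus slack = 2214592512
  testdir=/home/vagrant/spdk_repo/spdk/test/nvmf/target
  storage_fallback=$(mktemp -udt spdk.XXXXXX)          # /tmp/spdk.nwcAP4 in this run
  storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
  mkdir -p "${storage_candidates[@]}"

  declare -A fss avails
  while read -r source fs size use avail _ mount; do   # df -T: dev, type, 1K-blocks, used, avail, use%, mountpoint
      fss["$mount"]=$fs
      avails["$mount"]=$((avail * 1024))               # keep everything in bytes
  done < <(df -T | grep -v Filesystem)

  for target_dir in "${storage_candidates[@]}"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      [[ ${fss[$mount]} == tmpfs || ${fss[$mount]} == ramfs ]] && continue   # no RAM-backed storage
      if (( ${avails[$mount]:-0} >= requested_size )); then                  # 13793939456 >= 2214592512 here
          export SPDK_TEST_STORAGE=$target_dir
          printf '* Found test storage at %s\n' "$target_dir"
          break
      fi
  done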
00:17:46.629 11:08:54 -- common/autotest_common.sh@357 -- # local target_space new_size 00:17:46.629 11:08:54 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:17:46.629 11:08:54 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:46.629 11:08:54 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:17:46.629 11:08:54 -- common/autotest_common.sh@361 -- # mount=/home 00:17:46.629 11:08:54 -- common/autotest_common.sh@363 -- # target_space=13793939456 00:17:46.629 11:08:54 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:17:46.629 11:08:54 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:17:46.629 11:08:54 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:17:46.629 11:08:54 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:17:46.629 11:08:54 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:17:46.629 11:08:54 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:46.629 11:08:54 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:46.629 11:08:54 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:46.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:46.629 11:08:54 -- common/autotest_common.sh@378 -- # return 0 00:17:46.629 11:08:54 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:17:46.629 11:08:54 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:17:46.629 11:08:54 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:17:46.629 11:08:54 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:17:46.629 11:08:54 -- common/autotest_common.sh@1673 -- # true 00:17:46.629 11:08:54 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:17:46.629 11:08:54 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:17:46.629 11:08:54 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:17:46.629 11:08:54 -- common/autotest_common.sh@27 -- # exec 00:17:46.629 11:08:54 -- common/autotest_common.sh@29 -- # exec 00:17:46.629 11:08:54 -- common/autotest_common.sh@31 -- # xtrace_restore 00:17:46.629 11:08:54 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:17:46.629 11:08:54 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:17:46.629 11:08:54 -- common/autotest_common.sh@18 -- # set -x 00:17:46.629 11:08:54 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.629 11:08:54 -- nvmf/common.sh@7 -- # uname -s 00:17:46.629 11:08:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.629 11:08:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.629 11:08:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.629 11:08:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.629 11:08:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.629 11:08:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.629 11:08:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.629 11:08:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.629 11:08:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.629 11:08:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.629 11:08:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:17:46.629 11:08:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:17:46.629 11:08:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.629 11:08:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.629 11:08:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.629 11:08:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.629 11:08:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.629 11:08:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.629 11:08:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.629 11:08:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.629 11:08:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.629 11:08:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.629 11:08:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.629 11:08:54 -- paths/export.sh@5 -- # export PATH 00:17:46.629 11:08:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.629 11:08:54 -- nvmf/common.sh@47 -- # : 0 00:17:46.629 11:08:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.629 11:08:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.629 11:08:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.629 11:08:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.629 11:08:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.629 11:08:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.629 11:08:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.629 11:08:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.629 11:08:54 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:17:46.629 11:08:54 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:46.629 11:08:54 -- target/filesystem.sh@15 -- # nvmftestinit 00:17:46.629 11:08:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:46.629 11:08:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.629 11:08:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:46.629 11:08:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:46.629 11:08:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:46.629 11:08:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.629 11:08:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.629 11:08:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.629 11:08:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:46.629 11:08:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:46.629 11:08:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:46.629 11:08:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:46.629 11:08:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:46.629 11:08:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:46.629 11:08:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.629 11:08:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.629 11:08:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:46.629 11:08:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:46.629 11:08:54 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:46.629 11:08:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:46.629 11:08:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:46.629 11:08:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.629 11:08:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:46.629 11:08:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.630 11:08:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.630 11:08:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.630 11:08:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:46.630 11:08:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:46.630 Cannot find device "nvmf_tgt_br" 00:17:46.630 11:08:54 -- nvmf/common.sh@155 -- # true 00:17:46.630 11:08:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.630 Cannot find device "nvmf_tgt_br2" 00:17:46.630 11:08:54 -- nvmf/common.sh@156 -- # true 00:17:46.630 11:08:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:46.630 11:08:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:46.630 Cannot find device "nvmf_tgt_br" 00:17:46.630 11:08:54 -- nvmf/common.sh@158 -- # true 00:17:46.630 11:08:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:46.630 Cannot find device "nvmf_tgt_br2" 00:17:46.630 11:08:54 -- nvmf/common.sh@159 -- # true 00:17:46.630 11:08:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:46.630 11:08:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:46.630 11:08:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.630 11:08:54 -- nvmf/common.sh@162 -- # true 00:17:46.630 11:08:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.630 11:08:54 -- nvmf/common.sh@163 -- # true 00:17:46.630 11:08:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.630 11:08:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.630 11:08:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.630 11:08:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.630 11:08:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:46.630 11:08:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.887 11:08:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:46.887 11:08:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:46.887 11:08:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:46.887 11:08:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:46.887 11:08:54 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:46.887 11:08:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:46.887 11:08:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:46.887 11:08:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:46.887 11:08:54 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:46.887 11:08:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:46.887 11:08:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:46.887 11:08:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:46.887 11:08:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:46.887 11:08:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:46.888 11:08:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:46.888 11:08:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:46.888 11:08:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:46.888 11:08:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:46.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:46.888 00:17:46.888 --- 10.0.0.2 ping statistics --- 00:17:46.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.888 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:46.888 11:08:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:46.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:46.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:46.888 00:17:46.888 --- 10.0.0.3 ping statistics --- 00:17:46.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.888 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:46.888 11:08:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:46.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:46.888 00:17:46.888 --- 10.0.0.1 ping statistics --- 00:17:46.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.888 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:46.888 11:08:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.888 11:08:54 -- nvmf/common.sh@422 -- # return 0 00:17:46.888 11:08:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:46.888 11:08:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.888 11:08:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:46.888 11:08:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:46.888 11:08:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.888 11:08:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:46.888 11:08:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:46.888 11:08:55 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:17:46.888 11:08:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:46.888 11:08:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.888 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:17:46.888 ************************************ 00:17:46.888 START TEST nvmf_filesystem_no_in_capsule 00:17:46.888 ************************************ 00:17:46.888 11:08:55 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:17:46.888 11:08:55 -- target/filesystem.sh@47 -- # in_capsule=0 00:17:46.888 11:08:55 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:46.888 11:08:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:46.888 11:08:55 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:17:46.888 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:17:46.888 11:08:55 -- nvmf/common.sh@470 -- # nvmfpid=66462 00:17:46.888 11:08:55 -- nvmf/common.sh@471 -- # waitforlisten 66462 00:17:46.888 11:08:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:46.888 11:08:55 -- common/autotest_common.sh@817 -- # '[' -z 66462 ']' 00:17:46.888 11:08:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.888 11:08:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:46.888 11:08:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.888 11:08:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:46.888 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:17:47.146 [2024-04-18 11:08:55.219610] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:47.146 [2024-04-18 11:08:55.219785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.405 [2024-04-18 11:08:55.405469] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.663 [2024-04-18 11:08:55.711915] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.663 [2024-04-18 11:08:55.711993] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.663 [2024-04-18 11:08:55.712018] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.663 [2024-04-18 11:08:55.712034] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.663 [2024-04-18 11:08:55.712050] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
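Stripped of the stale-device cleanup and error handling, the nvmf_veth_init sequence traced above builds this topology: the initiator stays in the root namespace on 10.0.0.1, the target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, and a bridge joins the veth peers. A condensed replay of the commands from this run:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target ends of the veth pairs
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second listener address

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                                  # root ns -> target ns
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns

  # The target is then started inside the namespace (pid 66462 in this run) and
  # waitforlisten polls its RPC socket at /var/tmp/spdk.sock before the test continues:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &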
00:17:47.663 [2024-04-18 11:08:55.712290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.663 [2024-04-18 11:08:55.712657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.663 [2024-04-18 11:08:55.712709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.663 [2024-04-18 11:08:55.712977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.230 11:08:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:48.230 11:08:56 -- common/autotest_common.sh@850 -- # return 0 00:17:48.230 11:08:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:48.230 11:08:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:48.230 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:17:48.230 11:08:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.230 11:08:56 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:48.230 11:08:56 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:48.230 11:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.230 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:17:48.230 [2024-04-18 11:08:56.222787] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.230 11:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.230 11:08:56 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:48.230 11:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.230 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:17:48.797 Malloc1 00:17:48.797 11:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.797 11:08:56 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:48.797 11:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.797 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:17:48.797 11:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.797 11:08:56 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:48.797 11:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.797 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:17:48.797 11:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.797 11:08:56 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.797 11:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.797 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:17:48.797 [2024-04-18 11:08:56.809606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.797 11:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.797 11:08:56 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:48.797 11:08:56 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:17:48.797 11:08:56 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:48.797 11:08:56 -- common/autotest_common.sh@1366 -- # local bs 00:17:48.797 11:08:56 -- common/autotest_common.sh@1367 -- # local nb 00:17:48.797 11:08:56 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:48.797 11:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.797 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:17:48.797 
11:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.797 11:08:56 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:48.797 { 00:17:48.797 "aliases": [ 00:17:48.797 "2c3daddc-31b8-4478-8133-334973ad3ba3" 00:17:48.797 ], 00:17:48.797 "assigned_rate_limits": { 00:17:48.797 "r_mbytes_per_sec": 0, 00:17:48.797 "rw_ios_per_sec": 0, 00:17:48.797 "rw_mbytes_per_sec": 0, 00:17:48.797 "w_mbytes_per_sec": 0 00:17:48.797 }, 00:17:48.797 "block_size": 512, 00:17:48.797 "claim_type": "exclusive_write", 00:17:48.797 "claimed": true, 00:17:48.797 "driver_specific": {}, 00:17:48.797 "memory_domains": [ 00:17:48.797 { 00:17:48.797 "dma_device_id": "system", 00:17:48.797 "dma_device_type": 1 00:17:48.797 }, 00:17:48.797 { 00:17:48.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.797 "dma_device_type": 2 00:17:48.797 } 00:17:48.797 ], 00:17:48.797 "name": "Malloc1", 00:17:48.797 "num_blocks": 1048576, 00:17:48.797 "product_name": "Malloc disk", 00:17:48.797 "supported_io_types": { 00:17:48.797 "abort": true, 00:17:48.797 "compare": false, 00:17:48.797 "compare_and_write": false, 00:17:48.797 "flush": true, 00:17:48.797 "nvme_admin": false, 00:17:48.797 "nvme_io": false, 00:17:48.797 "read": true, 00:17:48.797 "reset": true, 00:17:48.797 "unmap": true, 00:17:48.797 "write": true, 00:17:48.797 "write_zeroes": true 00:17:48.797 }, 00:17:48.797 "uuid": "2c3daddc-31b8-4478-8133-334973ad3ba3", 00:17:48.797 "zoned": false 00:17:48.797 } 00:17:48.797 ]' 00:17:48.797 11:08:56 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:48.797 11:08:56 -- common/autotest_common.sh@1369 -- # bs=512 00:17:48.797 11:08:56 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:48.797 11:08:56 -- common/autotest_common.sh@1370 -- # nb=1048576 00:17:48.797 11:08:56 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:17:48.797 11:08:56 -- common/autotest_common.sh@1374 -- # echo 512 00:17:48.797 11:08:56 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:48.797 11:08:56 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.055 11:08:57 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:49.055 11:08:57 -- common/autotest_common.sh@1184 -- # local i=0 00:17:49.055 11:08:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.055 11:08:57 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:49.055 11:08:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:50.956 11:08:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:50.956 11:08:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:50.956 11:08:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.956 11:08:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:50.956 11:08:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.956 11:08:59 -- common/autotest_common.sh@1194 -- # return 0 00:17:50.956 11:08:59 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:50.956 11:08:59 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:50.956 11:08:59 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:50.956 11:08:59 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:50.956 11:08:59 -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:17:50.956 11:08:59 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:50.956 11:08:59 -- setup/common.sh@80 -- # echo 536870912 00:17:50.956 11:08:59 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:50.956 11:08:59 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:50.956 11:08:59 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:50.956 11:08:59 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:51.214 11:08:59 -- target/filesystem.sh@69 -- # partprobe 00:17:51.214 11:08:59 -- target/filesystem.sh@70 -- # sleep 1 00:17:52.149 11:09:00 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:17:52.149 11:09:00 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:52.149 11:09:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:52.149 11:09:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:52.149 11:09:00 -- common/autotest_common.sh@10 -- # set +x 00:17:52.149 ************************************ 00:17:52.149 START TEST filesystem_ext4 00:17:52.149 ************************************ 00:17:52.149 11:09:00 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:52.149 11:09:00 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:52.149 11:09:00 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:52.149 11:09:00 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:52.149 11:09:00 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:17:52.149 11:09:00 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:52.149 11:09:00 -- common/autotest_common.sh@914 -- # local i=0 00:17:52.149 11:09:00 -- common/autotest_common.sh@915 -- # local force 00:17:52.149 11:09:00 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:17:52.149 11:09:00 -- common/autotest_common.sh@918 -- # force=-F 00:17:52.149 11:09:00 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:52.149 mke2fs 1.46.5 (30-Dec-2021) 00:17:52.408 Discarding device blocks: 0/522240 done 00:17:52.408 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:52.408 Filesystem UUID: 96656113-b61d-4bfc-80c3-e668f3863276 00:17:52.408 Superblock backups stored on blocks: 00:17:52.408 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:52.408 00:17:52.408 Allocating group tables: 0/64 done 00:17:52.408 Writing inode tables: 0/64 done 00:17:52.408 Creating journal (8192 blocks): done 00:17:52.408 Writing superblocks and filesystem accounting information: 0/64 done 00:17:52.408 00:17:52.408 11:09:00 -- common/autotest_common.sh@931 -- # return 0 00:17:52.408 11:09:00 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:52.666 11:09:00 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:52.666 11:09:00 -- target/filesystem.sh@25 -- # sync 00:17:52.666 11:09:00 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:52.666 11:09:00 -- target/filesystem.sh@27 -- # sync 00:17:52.666 11:09:00 -- target/filesystem.sh@29 -- # i=0 00:17:52.666 11:09:00 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:52.666 11:09:00 -- target/filesystem.sh@37 -- # kill -0 66462 00:17:52.666 11:09:00 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:52.666 11:09:00 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:52.666 11:09:00 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:52.666 11:09:00 -- target/filesystem.sh@43 -- # lsblk -l -o 
NAME 00:17:52.666 00:17:52.666 real 0m0.474s 00:17:52.666 user 0m0.025s 00:17:52.666 sys 0m0.051s 00:17:52.666 11:09:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:52.666 11:09:00 -- common/autotest_common.sh@10 -- # set +x 00:17:52.666 ************************************ 00:17:52.666 END TEST filesystem_ext4 00:17:52.666 ************************************ 00:17:52.666 11:09:00 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:52.666 11:09:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:52.666 11:09:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:52.666 11:09:00 -- common/autotest_common.sh@10 -- # set +x 00:17:52.925 ************************************ 00:17:52.925 START TEST filesystem_btrfs 00:17:52.925 ************************************ 00:17:52.925 11:09:00 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:52.925 11:09:00 -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:52.925 11:09:00 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:52.925 11:09:00 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:52.925 11:09:00 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:17:52.925 11:09:00 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:52.925 11:09:00 -- common/autotest_common.sh@914 -- # local i=0 00:17:52.925 11:09:00 -- common/autotest_common.sh@915 -- # local force 00:17:52.925 11:09:00 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:17:52.925 11:09:00 -- common/autotest_common.sh@920 -- # force=-f 00:17:52.925 11:09:00 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:52.925 btrfs-progs v6.6.2 00:17:52.925 See https://btrfs.readthedocs.io for more information. 00:17:52.925 00:17:52.925 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:17:52.925 NOTE: several default settings have changed in version 5.15, please make sure 00:17:52.925 this does not affect your deployments: 00:17:52.925 - DUP for metadata (-m dup) 00:17:52.925 - enabled no-holes (-O no-holes) 00:17:52.925 - enabled free-space-tree (-R free-space-tree) 00:17:52.925 00:17:52.925 Label: (null) 00:17:52.925 UUID: 3d674d97-426f-4575-a77e-8b69515d86ae 00:17:52.925 Node size: 16384 00:17:52.925 Sector size: 4096 00:17:52.925 Filesystem size: 510.00MiB 00:17:52.925 Block group profiles: 00:17:52.925 Data: single 8.00MiB 00:17:52.925 Metadata: DUP 32.00MiB 00:17:52.925 System: DUP 8.00MiB 00:17:52.925 SSD detected: yes 00:17:52.925 Zoned device: no 00:17:52.925 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:17:52.925 Runtime features: free-space-tree 00:17:52.925 Checksum: crc32c 00:17:52.925 Number of devices: 1 00:17:52.925 Devices: 00:17:52.925 ID SIZE PATH 00:17:52.925 1 510.00MiB /dev/nvme0n1p1 00:17:52.925 00:17:52.925 11:09:01 -- common/autotest_common.sh@931 -- # return 0 00:17:52.925 11:09:01 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:52.925 11:09:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:52.925 11:09:01 -- target/filesystem.sh@25 -- # sync 00:17:53.186 11:09:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:53.186 11:09:01 -- target/filesystem.sh@27 -- # sync 00:17:53.186 11:09:01 -- target/filesystem.sh@29 -- # i=0 00:17:53.186 11:09:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:53.186 11:09:01 -- target/filesystem.sh@37 -- # kill -0 66462 00:17:53.186 11:09:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:53.186 11:09:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:53.186 11:09:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:53.186 11:09:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:53.186 00:17:53.186 real 0m0.249s 00:17:53.186 user 0m0.018s 00:17:53.186 sys 0m0.066s 00:17:53.186 11:09:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:53.186 11:09:01 -- common/autotest_common.sh@10 -- # set +x 00:17:53.186 ************************************ 00:17:53.186 END TEST filesystem_btrfs 00:17:53.186 ************************************ 00:17:53.186 11:09:01 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:17:53.186 11:09:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:53.186 11:09:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.186 11:09:01 -- common/autotest_common.sh@10 -- # set +x 00:17:53.186 ************************************ 00:17:53.186 START TEST filesystem_xfs 00:17:53.186 ************************************ 00:17:53.186 11:09:01 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:17:53.186 11:09:01 -- target/filesystem.sh@18 -- # fstype=xfs 00:17:53.186 11:09:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:53.186 11:09:01 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:53.186 11:09:01 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:17:53.187 11:09:01 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:53.187 11:09:01 -- common/autotest_common.sh@914 -- # local i=0 00:17:53.187 11:09:01 -- common/autotest_common.sh@915 -- # local force 00:17:53.187 11:09:01 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:17:53.187 11:09:01 -- common/autotest_common.sh@920 -- # force=-f 00:17:53.187 11:09:01 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:53.446 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:53.446 = sectsz=512 attr=2, projid32bit=1 00:17:53.446 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:53.446 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:53.446 data = bsize=4096 blocks=130560, imaxpct=25 00:17:53.446 = sunit=0 swidth=0 blks 00:17:53.446 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:53.446 log =internal log bsize=4096 blocks=16384, version=2 00:17:53.446 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:53.446 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:54.013 Discarding blocks...Done. 00:17:54.013 11:09:02 -- common/autotest_common.sh@931 -- # return 0 00:17:54.013 11:09:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:56.553 11:09:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:56.553 11:09:04 -- target/filesystem.sh@25 -- # sync 00:17:56.553 11:09:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:56.553 11:09:04 -- target/filesystem.sh@27 -- # sync 00:17:56.553 11:09:04 -- target/filesystem.sh@29 -- # i=0 00:17:56.553 11:09:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:56.553 11:09:04 -- target/filesystem.sh@37 -- # kill -0 66462 00:17:56.553 11:09:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:56.553 11:09:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:56.553 11:09:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:56.553 11:09:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:56.553 00:17:56.553 real 0m3.242s 00:17:56.553 user 0m0.019s 00:17:56.553 sys 0m0.067s 00:17:56.553 11:09:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:56.553 11:09:04 -- common/autotest_common.sh@10 -- # set +x 00:17:56.553 ************************************ 00:17:56.553 END TEST filesystem_xfs 00:17:56.553 ************************************ 00:17:56.553 11:09:04 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:56.553 11:09:04 -- target/filesystem.sh@93 -- # sync 00:17:56.553 11:09:04 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:56.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.553 11:09:04 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:56.553 11:09:04 -- common/autotest_common.sh@1205 -- # local i=0 00:17:56.553 11:09:04 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:56.553 11:09:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.553 11:09:04 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.553 11:09:04 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:56.553 11:09:04 -- common/autotest_common.sh@1217 -- # return 0 00:17:56.553 11:09:04 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.553 11:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.553 11:09:04 -- common/autotest_common.sh@10 -- # set +x 00:17:56.553 11:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.553 11:09:04 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:56.553 11:09:04 -- target/filesystem.sh@101 -- # killprocess 66462 00:17:56.553 11:09:04 -- common/autotest_common.sh@936 -- # '[' -z 66462 ']' 00:17:56.553 11:09:04 -- common/autotest_common.sh@940 -- # kill -0 66462 00:17:56.554 11:09:04 -- 
common/autotest_common.sh@941 -- # uname 00:17:56.554 11:09:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.554 11:09:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66462 00:17:56.554 11:09:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:56.554 11:09:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:56.554 killing process with pid 66462 00:17:56.554 11:09:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66462' 00:17:56.554 11:09:04 -- common/autotest_common.sh@955 -- # kill 66462 00:17:56.554 11:09:04 -- common/autotest_common.sh@960 -- # wait 66462 00:17:59.084 11:09:07 -- target/filesystem.sh@102 -- # nvmfpid= 00:17:59.084 00:17:59.084 real 0m12.164s 00:17:59.084 user 0m44.467s 00:17:59.084 sys 0m1.698s 00:17:59.084 11:09:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:59.084 11:09:07 -- common/autotest_common.sh@10 -- # set +x 00:17:59.084 ************************************ 00:17:59.084 END TEST nvmf_filesystem_no_in_capsule 00:17:59.084 ************************************ 00:17:59.084 11:09:07 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:17:59.084 11:09:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:59.084 11:09:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:59.084 11:09:07 -- common/autotest_common.sh@10 -- # set +x 00:17:59.342 ************************************ 00:17:59.342 START TEST nvmf_filesystem_in_capsule 00:17:59.342 ************************************ 00:17:59.342 11:09:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:17:59.342 11:09:07 -- target/filesystem.sh@47 -- # in_capsule=4096 00:17:59.342 11:09:07 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:59.342 11:09:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:59.342 11:09:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:59.342 11:09:07 -- common/autotest_common.sh@10 -- # set +x 00:17:59.342 11:09:07 -- nvmf/common.sh@470 -- # nvmfpid=66828 00:17:59.342 11:09:07 -- nvmf/common.sh@471 -- # waitforlisten 66828 00:17:59.342 11:09:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:59.342 11:09:07 -- common/autotest_common.sh@817 -- # '[' -z 66828 ']' 00:17:59.342 11:09:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.342 11:09:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:59.342 11:09:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.342 11:09:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:59.342 11:09:07 -- common/autotest_common.sh@10 -- # set +x 00:17:59.342 [2024-04-18 11:09:07.504227] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
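Each of the filesystem_ext4/btrfs/xfs tests that just finished runs the same cycle against the exported namespace. Condensed from the target/filesystem.sh steps traced above (the umount retry loop and the lsblk sanity checks are trimmed):

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  sleep 1
  for fstype in ext4 btrfs xfs; do
      if [[ $fstype == ext4 ]]; then
          mkfs.ext4 -F /dev/nvme0n1p1        # ext4 forces with -F
      else
          "mkfs.$fstype" -f /dev/nvme0n1p1   # btrfs and xfs force with -f
      fi
      mount /dev/nvme0n1p1 /mnt/device
      touch /mnt/device/aaa && sync          # push a write all the way to the target
      rm /mnt/device/aaa && sync
      umount /mnt/device
      kill -0 "$nvmfpid"                     # the target (pid 66462 above) must have survived the I/O
  done
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The in-capsule run that starts here repeats exactly this cycle; the only functional difference is that the transport is created with nvmf_create_transport -t tcp -o -u 8192 -c 4096 instead of -c 0, so up to 4096 bytes of write data ride inside the command capsule rather than being fetched separately.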
00:17:59.342 [2024-04-18 11:09:07.504425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.601 [2024-04-18 11:09:07.683963] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.858 [2024-04-18 11:09:07.943072] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.858 [2024-04-18 11:09:07.943153] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.858 [2024-04-18 11:09:07.943174] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.858 [2024-04-18 11:09:07.943188] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.858 [2024-04-18 11:09:07.943202] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.858 [2024-04-18 11:09:07.943426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.858 [2024-04-18 11:09:07.944153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.858 [2024-04-18 11:09:07.944368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.858 [2024-04-18 11:09:07.944415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.425 11:09:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:00.425 11:09:08 -- common/autotest_common.sh@850 -- # return 0 00:18:00.425 11:09:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:00.425 11:09:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:00.425 11:09:08 -- common/autotest_common.sh@10 -- # set +x 00:18:00.425 11:09:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.425 11:09:08 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:18:00.425 11:09:08 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:18:00.425 11:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:00.425 11:09:08 -- common/autotest_common.sh@10 -- # set +x 00:18:00.425 [2024-04-18 11:09:08.433671] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.425 11:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:00.425 11:09:08 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:18:00.425 11:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:00.425 11:09:08 -- common/autotest_common.sh@10 -- # set +x 00:18:00.992 Malloc1 00:18:00.992 11:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:00.992 11:09:09 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:00.992 11:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:00.992 11:09:09 -- common/autotest_common.sh@10 -- # set +x 00:18:00.992 11:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:00.992 11:09:09 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:00.992 11:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:00.992 11:09:09 -- common/autotest_common.sh@10 -- # set +x 00:18:00.992 11:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:00.992 11:09:09 -- target/filesystem.sh@56 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.992 11:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:00.992 11:09:09 -- common/autotest_common.sh@10 -- # set +x 00:18:00.992 [2024-04-18 11:09:09.031494] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.993 11:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:00.993 11:09:09 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:18:00.993 11:09:09 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:18:00.993 11:09:09 -- common/autotest_common.sh@1365 -- # local bdev_info 00:18:00.993 11:09:09 -- common/autotest_common.sh@1366 -- # local bs 00:18:00.993 11:09:09 -- common/autotest_common.sh@1367 -- # local nb 00:18:00.993 11:09:09 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:18:00.993 11:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:00.993 11:09:09 -- common/autotest_common.sh@10 -- # set +x 00:18:00.993 11:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:00.993 11:09:09 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:18:00.993 { 00:18:00.993 "aliases": [ 00:18:00.993 "e0666d29-1f91-4829-82c0-ffde1cb01a06" 00:18:00.993 ], 00:18:00.993 "assigned_rate_limits": { 00:18:00.993 "r_mbytes_per_sec": 0, 00:18:00.993 "rw_ios_per_sec": 0, 00:18:00.993 "rw_mbytes_per_sec": 0, 00:18:00.993 "w_mbytes_per_sec": 0 00:18:00.993 }, 00:18:00.993 "block_size": 512, 00:18:00.993 "claim_type": "exclusive_write", 00:18:00.993 "claimed": true, 00:18:00.993 "driver_specific": {}, 00:18:00.993 "memory_domains": [ 00:18:00.993 { 00:18:00.993 "dma_device_id": "system", 00:18:00.993 "dma_device_type": 1 00:18:00.993 }, 00:18:00.993 { 00:18:00.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.993 "dma_device_type": 2 00:18:00.993 } 00:18:00.993 ], 00:18:00.993 "name": "Malloc1", 00:18:00.993 "num_blocks": 1048576, 00:18:00.993 "product_name": "Malloc disk", 00:18:00.993 "supported_io_types": { 00:18:00.993 "abort": true, 00:18:00.993 "compare": false, 00:18:00.993 "compare_and_write": false, 00:18:00.993 "flush": true, 00:18:00.993 "nvme_admin": false, 00:18:00.993 "nvme_io": false, 00:18:00.993 "read": true, 00:18:00.993 "reset": true, 00:18:00.993 "unmap": true, 00:18:00.993 "write": true, 00:18:00.993 "write_zeroes": true 00:18:00.993 }, 00:18:00.993 "uuid": "e0666d29-1f91-4829-82c0-ffde1cb01a06", 00:18:00.993 "zoned": false 00:18:00.993 } 00:18:00.993 ]' 00:18:00.993 11:09:09 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:18:00.993 11:09:09 -- common/autotest_common.sh@1369 -- # bs=512 00:18:00.993 11:09:09 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:18:00.993 11:09:09 -- common/autotest_common.sh@1370 -- # nb=1048576 00:18:00.993 11:09:09 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:18:00.993 11:09:09 -- common/autotest_common.sh@1374 -- # echo 512 00:18:00.993 11:09:09 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:18:00.993 11:09:09 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:01.251 11:09:09 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:18:01.251 11:09:09 -- common/autotest_common.sh@1184 -- # local i=0 00:18:01.251 11:09:09 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:18:01.251 11:09:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:01.251 11:09:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:03.152 11:09:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:03.152 11:09:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:03.152 11:09:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:03.152 11:09:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:03.152 11:09:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.152 11:09:11 -- common/autotest_common.sh@1194 -- # return 0 00:18:03.152 11:09:11 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:18:03.152 11:09:11 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:18:03.152 11:09:11 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:18:03.152 11:09:11 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:18:03.152 11:09:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:18:03.152 11:09:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:03.152 11:09:11 -- setup/common.sh@80 -- # echo 536870912 00:18:03.152 11:09:11 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:18:03.152 11:09:11 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:18:03.152 11:09:11 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:18:03.152 11:09:11 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:18:03.152 11:09:11 -- target/filesystem.sh@69 -- # partprobe 00:18:03.410 11:09:11 -- target/filesystem.sh@70 -- # sleep 1 00:18:04.347 11:09:12 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:18:04.347 11:09:12 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:18:04.347 11:09:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:04.347 11:09:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:04.347 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:18:04.347 ************************************ 00:18:04.347 START TEST filesystem_in_capsule_ext4 00:18:04.347 ************************************ 00:18:04.347 11:09:12 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:18:04.347 11:09:12 -- target/filesystem.sh@18 -- # fstype=ext4 00:18:04.347 11:09:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:18:04.347 11:09:12 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:18:04.347 11:09:12 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:18:04.347 11:09:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:18:04.347 11:09:12 -- common/autotest_common.sh@914 -- # local i=0 00:18:04.347 11:09:12 -- common/autotest_common.sh@915 -- # local force 00:18:04.347 11:09:12 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:18:04.347 11:09:12 -- common/autotest_common.sh@918 -- # force=-F 00:18:04.347 11:09:12 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:18:04.347 mke2fs 1.46.5 (30-Dec-2021) 00:18:04.628 Discarding device blocks: 0/522240 done 00:18:04.628 Creating filesystem with 522240 1k blocks and 130560 inodes 00:18:04.628 Filesystem UUID: 205678ee-79f1-497b-bf95-0feae75c6653 00:18:04.628 Superblock backups stored on blocks: 00:18:04.628 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:18:04.628 00:18:04.628 Allocating group tables: 0/64 done 
00:18:04.628 Writing inode tables: 0/64 done 00:18:04.628 Creating journal (8192 blocks): done 00:18:04.628 Writing superblocks and filesystem accounting information: 0/64 done 00:18:04.628 00:18:04.628 11:09:12 -- common/autotest_common.sh@931 -- # return 0 00:18:04.628 11:09:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:18:04.628 11:09:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:18:04.628 11:09:12 -- target/filesystem.sh@25 -- # sync 00:18:04.887 11:09:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:18:04.887 11:09:12 -- target/filesystem.sh@27 -- # sync 00:18:04.887 11:09:12 -- target/filesystem.sh@29 -- # i=0 00:18:04.887 11:09:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:18:04.887 11:09:12 -- target/filesystem.sh@37 -- # kill -0 66828 00:18:04.887 11:09:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:18:04.887 11:09:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:18:04.887 11:09:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:18:04.887 11:09:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:18:04.887 ************************************ 00:18:04.887 END TEST filesystem_in_capsule_ext4 00:18:04.887 ************************************ 00:18:04.887 00:18:04.887 real 0m0.379s 00:18:04.887 user 0m0.030s 00:18:04.887 sys 0m0.050s 00:18:04.887 11:09:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:04.887 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:18:04.887 11:09:12 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:18:04.887 11:09:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:04.887 11:09:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:04.887 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:18:04.887 ************************************ 00:18:04.887 START TEST filesystem_in_capsule_btrfs 00:18:04.887 ************************************ 00:18:04.887 11:09:13 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:18:04.887 11:09:13 -- target/filesystem.sh@18 -- # fstype=btrfs 00:18:04.887 11:09:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:18:04.887 11:09:13 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:18:04.887 11:09:13 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:18:04.887 11:09:13 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:18:04.887 11:09:13 -- common/autotest_common.sh@914 -- # local i=0 00:18:04.887 11:09:13 -- common/autotest_common.sh@915 -- # local force 00:18:04.887 11:09:13 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:18:04.887 11:09:13 -- common/autotest_common.sh@920 -- # force=-f 00:18:04.887 11:09:13 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:18:05.145 btrfs-progs v6.6.2 00:18:05.145 See https://btrfs.readthedocs.io for more information. 00:18:05.145 00:18:05.145 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:18:05.145 NOTE: several default settings have changed in version 5.15, please make sure 00:18:05.145 this does not affect your deployments: 00:18:05.145 - DUP for metadata (-m dup) 00:18:05.145 - enabled no-holes (-O no-holes) 00:18:05.145 - enabled free-space-tree (-R free-space-tree) 00:18:05.145 00:18:05.145 Label: (null) 00:18:05.145 UUID: 5973393d-3413-4443-8964-345906ffedc9 00:18:05.145 Node size: 16384 00:18:05.145 Sector size: 4096 00:18:05.145 Filesystem size: 510.00MiB 00:18:05.145 Block group profiles: 00:18:05.145 Data: single 8.00MiB 00:18:05.145 Metadata: DUP 32.00MiB 00:18:05.145 System: DUP 8.00MiB 00:18:05.145 SSD detected: yes 00:18:05.145 Zoned device: no 00:18:05.145 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:18:05.145 Runtime features: free-space-tree 00:18:05.145 Checksum: crc32c 00:18:05.145 Number of devices: 1 00:18:05.145 Devices: 00:18:05.145 ID SIZE PATH 00:18:05.145 1 510.00MiB /dev/nvme0n1p1 00:18:05.145 00:18:05.145 11:09:13 -- common/autotest_common.sh@931 -- # return 0 00:18:05.145 11:09:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:18:05.145 11:09:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:18:05.145 11:09:13 -- target/filesystem.sh@25 -- # sync 00:18:05.145 11:09:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:18:05.145 11:09:13 -- target/filesystem.sh@27 -- # sync 00:18:05.145 11:09:13 -- target/filesystem.sh@29 -- # i=0 00:18:05.145 11:09:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:18:05.145 11:09:13 -- target/filesystem.sh@37 -- # kill -0 66828 00:18:05.145 11:09:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:18:05.145 11:09:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:18:05.145 11:09:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:18:05.145 11:09:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:18:05.145 ************************************ 00:18:05.145 END TEST filesystem_in_capsule_btrfs 00:18:05.145 ************************************ 00:18:05.145 00:18:05.145 real 0m0.288s 00:18:05.145 user 0m0.027s 00:18:05.145 sys 0m0.069s 00:18:05.145 11:09:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:05.145 11:09:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.145 11:09:13 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:18:05.145 11:09:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:05.145 11:09:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:05.145 11:09:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.402 ************************************ 00:18:05.402 START TEST filesystem_in_capsule_xfs 00:18:05.402 ************************************ 00:18:05.402 11:09:13 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:18:05.402 11:09:13 -- target/filesystem.sh@18 -- # fstype=xfs 00:18:05.402 11:09:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:18:05.402 11:09:13 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:18:05.402 11:09:13 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:18:05.402 11:09:13 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:18:05.403 11:09:13 -- common/autotest_common.sh@914 -- # local i=0 00:18:05.403 11:09:13 -- common/autotest_common.sh@915 -- # local force 00:18:05.403 11:09:13 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:18:05.403 11:09:13 -- common/autotest_common.sh@920 -- # force=-f 
00:18:05.403 11:09:13 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:18:05.403 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:18:05.403 = sectsz=512 attr=2, projid32bit=1 00:18:05.403 = crc=1 finobt=1, sparse=1, rmapbt=0 00:18:05.403 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:18:05.403 data = bsize=4096 blocks=130560, imaxpct=25 00:18:05.403 = sunit=0 swidth=0 blks 00:18:05.403 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:18:05.403 log =internal log bsize=4096 blocks=16384, version=2 00:18:05.403 = sectsz=512 sunit=0 blks, lazy-count=1 00:18:05.403 realtime =none extsz=4096 blocks=0, rtextents=0 00:18:06.337 Discarding blocks...Done. 00:18:06.337 11:09:14 -- common/autotest_common.sh@931 -- # return 0 00:18:06.337 11:09:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:18:08.237 11:09:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:18:08.237 11:09:16 -- target/filesystem.sh@25 -- # sync 00:18:08.237 11:09:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:18:08.237 11:09:16 -- target/filesystem.sh@27 -- # sync 00:18:08.237 11:09:16 -- target/filesystem.sh@29 -- # i=0 00:18:08.237 11:09:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:18:08.237 11:09:16 -- target/filesystem.sh@37 -- # kill -0 66828 00:18:08.237 11:09:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:18:08.237 11:09:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:18:08.237 11:09:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:18:08.237 11:09:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:18:08.237 ************************************ 00:18:08.237 END TEST filesystem_in_capsule_xfs 00:18:08.237 ************************************ 00:18:08.237 00:18:08.237 real 0m2.653s 00:18:08.237 user 0m0.025s 00:18:08.237 sys 0m0.054s 00:18:08.237 11:09:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:08.237 11:09:16 -- common/autotest_common.sh@10 -- # set +x 00:18:08.237 11:09:16 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:18:08.237 11:09:16 -- target/filesystem.sh@93 -- # sync 00:18:08.237 11:09:16 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:08.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.237 11:09:16 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:08.237 11:09:16 -- common/autotest_common.sh@1205 -- # local i=0 00:18:08.237 11:09:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:08.237 11:09:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:08.237 11:09:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:08.237 11:09:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:08.237 11:09:16 -- common/autotest_common.sh@1217 -- # return 0 00:18:08.237 11:09:16 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.237 11:09:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.237 11:09:16 -- common/autotest_common.sh@10 -- # set +x 00:18:08.237 11:09:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.237 11:09:16 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:08.237 11:09:16 -- target/filesystem.sh@101 -- # killprocess 66828 00:18:08.237 11:09:16 -- common/autotest_common.sh@936 -- # '[' -z 66828 ']' 00:18:08.237 11:09:16 -- common/autotest_common.sh@940 -- # kill -0 66828 
00:18:08.237 11:09:16 -- common/autotest_common.sh@941 -- # uname 00:18:08.237 11:09:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.237 11:09:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66828 00:18:08.237 killing process with pid 66828 00:18:08.237 11:09:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:08.237 11:09:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:08.237 11:09:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66828' 00:18:08.237 11:09:16 -- common/autotest_common.sh@955 -- # kill 66828 00:18:08.237 11:09:16 -- common/autotest_common.sh@960 -- # wait 66828 00:18:10.778 11:09:18 -- target/filesystem.sh@102 -- # nvmfpid= 00:18:10.778 00:18:10.778 real 0m11.324s 00:18:10.778 user 0m41.478s 00:18:10.778 sys 0m1.644s 00:18:10.778 11:09:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:10.778 ************************************ 00:18:10.778 END TEST nvmf_filesystem_in_capsule 00:18:10.778 ************************************ 00:18:10.778 11:09:18 -- common/autotest_common.sh@10 -- # set +x 00:18:10.778 11:09:18 -- target/filesystem.sh@108 -- # nvmftestfini 00:18:10.778 11:09:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:10.778 11:09:18 -- nvmf/common.sh@117 -- # sync 00:18:10.778 11:09:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:10.778 11:09:18 -- nvmf/common.sh@120 -- # set +e 00:18:10.778 11:09:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:10.778 11:09:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:10.778 rmmod nvme_tcp 00:18:10.778 rmmod nvme_fabrics 00:18:10.778 rmmod nvme_keyring 00:18:10.778 11:09:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:10.778 11:09:18 -- nvmf/common.sh@124 -- # set -e 00:18:10.778 11:09:18 -- nvmf/common.sh@125 -- # return 0 00:18:10.778 11:09:18 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:10.778 11:09:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:10.778 11:09:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:10.778 11:09:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:10.778 11:09:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:10.778 11:09:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:10.778 11:09:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.778 11:09:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.778 11:09:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.778 11:09:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:10.778 00:18:10.778 real 0m24.461s 00:18:10.778 user 1m26.214s 00:18:10.778 sys 0m3.810s 00:18:10.778 11:09:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:10.778 11:09:18 -- common/autotest_common.sh@10 -- # set +x 00:18:10.778 ************************************ 00:18:10.778 END TEST nvmf_filesystem 00:18:10.778 ************************************ 00:18:10.778 11:09:18 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:18:10.778 11:09:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:10.778 11:09:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:10.778 11:09:18 -- common/autotest_common.sh@10 -- # set +x 00:18:10.778 ************************************ 00:18:10.778 START TEST nvmf_discovery 00:18:10.778 ************************************ 00:18:10.778 11:09:18 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:18:11.037 * Looking for test storage... 00:18:11.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:11.037 11:09:19 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.037 11:09:19 -- nvmf/common.sh@7 -- # uname -s 00:18:11.037 11:09:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.037 11:09:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.037 11:09:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.037 11:09:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.037 11:09:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.037 11:09:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.037 11:09:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.037 11:09:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.037 11:09:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.037 11:09:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.037 11:09:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:11.037 11:09:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:11.037 11:09:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.037 11:09:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.037 11:09:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.037 11:09:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.037 11:09:19 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.037 11:09:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.037 11:09:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.037 11:09:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.037 11:09:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.037 11:09:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.037 11:09:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.037 11:09:19 -- paths/export.sh@5 -- # export PATH 00:18:11.037 11:09:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.037 11:09:19 -- nvmf/common.sh@47 -- # : 0 00:18:11.037 11:09:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.037 11:09:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:11.037 11:09:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.037 11:09:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.037 11:09:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.037 11:09:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.037 11:09:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.037 11:09:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.037 11:09:19 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:18:11.037 11:09:19 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:18:11.037 11:09:19 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:18:11.037 11:09:19 -- target/discovery.sh@15 -- # hash nvme 00:18:11.037 11:09:19 -- target/discovery.sh@20 -- # nvmftestinit 00:18:11.037 11:09:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:11.037 11:09:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.037 11:09:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:11.037 11:09:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:11.037 11:09:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:11.037 11:09:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.037 11:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.037 11:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.037 11:09:19 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:11.037 11:09:19 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:11.037 11:09:19 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:11.037 11:09:19 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:11.037 11:09:19 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:11.037 11:09:19 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:11.037 11:09:19 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.037 11:09:19 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.037 11:09:19 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:11.037 11:09:19 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:11.037 11:09:19 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.037 11:09:19 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.037 11:09:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.037 11:09:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.037 11:09:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.037 11:09:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.037 11:09:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.037 11:09:19 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.037 11:09:19 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:11.037 11:09:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:11.037 Cannot find device "nvmf_tgt_br" 00:18:11.037 11:09:19 -- nvmf/common.sh@155 -- # true 00:18:11.037 11:09:19 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.037 Cannot find device "nvmf_tgt_br2" 00:18:11.037 11:09:19 -- nvmf/common.sh@156 -- # true 00:18:11.037 11:09:19 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:11.037 11:09:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:11.037 Cannot find device "nvmf_tgt_br" 00:18:11.037 11:09:19 -- nvmf/common.sh@158 -- # true 00:18:11.037 11:09:19 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:11.037 Cannot find device "nvmf_tgt_br2" 00:18:11.037 11:09:19 -- nvmf/common.sh@159 -- # true 00:18:11.037 11:09:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:11.037 11:09:19 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:11.037 11:09:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.037 11:09:19 -- nvmf/common.sh@162 -- # true 00:18:11.037 11:09:19 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.037 11:09:19 -- nvmf/common.sh@163 -- # true 00:18:11.037 11:09:19 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:11.037 11:09:19 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:11.037 11:09:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:11.037 11:09:19 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:11.037 11:09:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:11.037 11:09:19 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.296 11:09:19 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.296 11:09:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:11.296 11:09:19 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:11.296 11:09:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:11.296 11:09:19 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:11.296 11:09:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:11.296 11:09:19 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:11.296 11:09:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.296 11:09:19 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.296 11:09:19 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.296 11:09:19 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:11.296 11:09:19 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:11.296 11:09:19 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.296 11:09:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.296 11:09:19 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.296 11:09:19 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.296 11:09:19 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:11.296 11:09:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:11.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:18:11.296 00:18:11.296 --- 10.0.0.2 ping statistics --- 00:18:11.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.296 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:18:11.296 11:09:19 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:11.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:11.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:18:11.296 00:18:11.296 --- 10.0.0.3 ping statistics --- 00:18:11.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.296 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:11.296 11:09:19 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:11.296 00:18:11.296 --- 10.0.0.1 ping statistics --- 00:18:11.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.296 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:11.296 11:09:19 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.296 11:09:19 -- nvmf/common.sh@422 -- # return 0 00:18:11.296 11:09:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:11.296 11:09:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.296 11:09:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:11.296 11:09:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:11.296 11:09:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.296 11:09:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:11.296 11:09:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:11.296 11:09:19 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:18:11.296 11:09:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:11.296 11:09:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:11.296 11:09:19 -- common/autotest_common.sh@10 -- # set +x 00:18:11.296 11:09:19 -- nvmf/common.sh@470 -- # nvmfpid=67354 00:18:11.296 11:09:19 -- nvmf/common.sh@471 -- # waitforlisten 67354 00:18:11.296 11:09:19 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:11.296 11:09:19 -- common/autotest_common.sh@817 -- # '[' -z 67354 ']' 00:18:11.296 11:09:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.296 11:09:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:11.296 11:09:19 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.296 11:09:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:11.296 11:09:19 -- common/autotest_common.sh@10 -- # set +x 00:18:11.554 [2024-04-18 11:09:19.573586] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:11.554 [2024-04-18 11:09:19.573768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.554 [2024-04-18 11:09:19.749655] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.811 [2024-04-18 11:09:20.022993] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.811 [2024-04-18 11:09:20.023065] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.811 [2024-04-18 11:09:20.023089] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.812 [2024-04-18 11:09:20.023122] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.812 [2024-04-18 11:09:20.023141] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.812 [2024-04-18 11:09:20.023333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.812 [2024-04-18 11:09:20.024016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.812 [2024-04-18 11:09:20.024204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.812 [2024-04-18 11:09:20.024246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:12.378 11:09:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:12.378 11:09:20 -- common/autotest_common.sh@850 -- # return 0 00:18:12.378 11:09:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:12.378 11:09:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:12.378 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.378 11:09:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.378 11:09:20 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.378 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.378 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.378 [2024-04-18 11:09:20.506724] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.378 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.378 11:09:20 -- target/discovery.sh@26 -- # seq 1 4 00:18:12.378 11:09:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:18:12.378 11:09:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:18:12.378 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.378 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.378 Null1 00:18:12.378 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.378 11:09:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:12.378 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.378 
11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.378 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.378 11:09:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:18:12.378 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.379 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.379 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.379 11:09:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.379 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.379 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.379 [2024-04-18 11:09:20.576951] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.379 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.379 11:09:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:18:12.379 11:09:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:18:12.379 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.379 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.379 Null2 00:18:12.379 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.379 11:09:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:18:12.379 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.379 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.379 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.379 11:09:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:18:12.379 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.379 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:18:12.639 11:09:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 Null3 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:18:12.639 11:09:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 Null4 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.639 11:09:20 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:18:12.639 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.639 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.639 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.640 11:09:20 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -a 10.0.0.2 -s 4420 00:18:12.640 00:18:12.640 Discovery Log Number of Records 6, Generation counter 6 00:18:12.640 =====Discovery Log Entry 0====== 00:18:12.640 trtype: tcp 00:18:12.640 adrfam: ipv4 00:18:12.640 subtype: current discovery subsystem 00:18:12.640 treq: not required 00:18:12.640 portid: 0 00:18:12.640 trsvcid: 4420 00:18:12.640 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:12.640 traddr: 10.0.0.2 00:18:12.640 eflags: explicit discovery connections, duplicate discovery information 00:18:12.640 sectype: none 00:18:12.640 =====Discovery Log Entry 1====== 00:18:12.640 trtype: tcp 00:18:12.640 adrfam: ipv4 00:18:12.640 subtype: nvme subsystem 00:18:12.640 treq: not required 00:18:12.640 portid: 0 00:18:12.640 trsvcid: 4420 00:18:12.640 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:12.640 traddr: 10.0.0.2 00:18:12.640 eflags: none 00:18:12.640 sectype: none 00:18:12.640 =====Discovery Log Entry 2====== 00:18:12.640 trtype: tcp 00:18:12.640 adrfam: ipv4 
00:18:12.640 subtype: nvme subsystem 00:18:12.640 treq: not required 00:18:12.640 portid: 0 00:18:12.640 trsvcid: 4420 00:18:12.640 subnqn: nqn.2016-06.io.spdk:cnode2 00:18:12.640 traddr: 10.0.0.2 00:18:12.640 eflags: none 00:18:12.640 sectype: none 00:18:12.640 =====Discovery Log Entry 3====== 00:18:12.640 trtype: tcp 00:18:12.640 adrfam: ipv4 00:18:12.640 subtype: nvme subsystem 00:18:12.640 treq: not required 00:18:12.640 portid: 0 00:18:12.640 trsvcid: 4420 00:18:12.640 subnqn: nqn.2016-06.io.spdk:cnode3 00:18:12.640 traddr: 10.0.0.2 00:18:12.640 eflags: none 00:18:12.640 sectype: none 00:18:12.640 =====Discovery Log Entry 4====== 00:18:12.640 trtype: tcp 00:18:12.640 adrfam: ipv4 00:18:12.640 subtype: nvme subsystem 00:18:12.640 treq: not required 00:18:12.640 portid: 0 00:18:12.640 trsvcid: 4420 00:18:12.640 subnqn: nqn.2016-06.io.spdk:cnode4 00:18:12.640 traddr: 10.0.0.2 00:18:12.640 eflags: none 00:18:12.640 sectype: none 00:18:12.640 =====Discovery Log Entry 5====== 00:18:12.640 trtype: tcp 00:18:12.640 adrfam: ipv4 00:18:12.640 subtype: discovery subsystem referral 00:18:12.640 treq: not required 00:18:12.640 portid: 0 00:18:12.640 trsvcid: 4430 00:18:12.640 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:12.640 traddr: 10.0.0.2 00:18:12.640 eflags: none 00:18:12.640 sectype: none 00:18:12.640 Perform nvmf subsystem discovery via RPC 00:18:12.640 11:09:20 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:18:12.640 11:09:20 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:18:12.640 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.640 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.640 [2024-04-18 11:09:20.769031] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:18:12.640 [ 00:18:12.640 { 00:18:12.640 "allow_any_host": true, 00:18:12.640 "hosts": [], 00:18:12.640 "listen_addresses": [ 00:18:12.640 { 00:18:12.640 "adrfam": "IPv4", 00:18:12.640 "traddr": "10.0.0.2", 00:18:12.640 "transport": "TCP", 00:18:12.640 "trsvcid": "4420", 00:18:12.640 "trtype": "TCP" 00:18:12.640 } 00:18:12.640 ], 00:18:12.640 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:12.640 "subtype": "Discovery" 00:18:12.640 }, 00:18:12.640 { 00:18:12.640 "allow_any_host": true, 00:18:12.640 "hosts": [], 00:18:12.640 "listen_addresses": [ 00:18:12.640 { 00:18:12.640 "adrfam": "IPv4", 00:18:12.640 "traddr": "10.0.0.2", 00:18:12.640 "transport": "TCP", 00:18:12.640 "trsvcid": "4420", 00:18:12.640 "trtype": "TCP" 00:18:12.640 } 00:18:12.640 ], 00:18:12.640 "max_cntlid": 65519, 00:18:12.640 "max_namespaces": 32, 00:18:12.640 "min_cntlid": 1, 00:18:12.640 "model_number": "SPDK bdev Controller", 00:18:12.640 "namespaces": [ 00:18:12.640 { 00:18:12.640 "bdev_name": "Null1", 00:18:12.640 "name": "Null1", 00:18:12.640 "nguid": "F2D606227C5C4296A735EC4F0BF78A5A", 00:18:12.640 "nsid": 1, 00:18:12.640 "uuid": "f2d60622-7c5c-4296-a735-ec4f0bf78a5a" 00:18:12.640 } 00:18:12.640 ], 00:18:12.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.640 "serial_number": "SPDK00000000000001", 00:18:12.640 "subtype": "NVMe" 00:18:12.640 }, 00:18:12.640 { 00:18:12.640 "allow_any_host": true, 00:18:12.640 "hosts": [], 00:18:12.640 "listen_addresses": [ 00:18:12.640 { 00:18:12.640 "adrfam": "IPv4", 00:18:12.640 "traddr": "10.0.0.2", 00:18:12.640 "transport": "TCP", 00:18:12.640 "trsvcid": "4420", 00:18:12.640 "trtype": "TCP" 00:18:12.640 
} 00:18:12.640 ], 00:18:12.640 "max_cntlid": 65519, 00:18:12.640 "max_namespaces": 32, 00:18:12.640 "min_cntlid": 1, 00:18:12.640 "model_number": "SPDK bdev Controller", 00:18:12.640 "namespaces": [ 00:18:12.640 { 00:18:12.640 "bdev_name": "Null2", 00:18:12.640 "name": "Null2", 00:18:12.640 "nguid": "8A03E82CB8CF4D83906AF1E931D5B076", 00:18:12.640 "nsid": 1, 00:18:12.640 "uuid": "8a03e82c-b8cf-4d83-906a-f1e931d5b076" 00:18:12.640 } 00:18:12.640 ], 00:18:12.640 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:12.640 "serial_number": "SPDK00000000000002", 00:18:12.640 "subtype": "NVMe" 00:18:12.640 }, 00:18:12.640 { 00:18:12.640 "allow_any_host": true, 00:18:12.640 "hosts": [], 00:18:12.640 "listen_addresses": [ 00:18:12.640 { 00:18:12.640 "adrfam": "IPv4", 00:18:12.640 "traddr": "10.0.0.2", 00:18:12.640 "transport": "TCP", 00:18:12.640 "trsvcid": "4420", 00:18:12.640 "trtype": "TCP" 00:18:12.640 } 00:18:12.640 ], 00:18:12.640 "max_cntlid": 65519, 00:18:12.640 "max_namespaces": 32, 00:18:12.640 "min_cntlid": 1, 00:18:12.640 "model_number": "SPDK bdev Controller", 00:18:12.640 "namespaces": [ 00:18:12.640 { 00:18:12.640 "bdev_name": "Null3", 00:18:12.640 "name": "Null3", 00:18:12.640 "nguid": "9A65B510AE914878A6823F494C265510", 00:18:12.640 "nsid": 1, 00:18:12.640 "uuid": "9a65b510-ae91-4878-a682-3f494c265510" 00:18:12.640 } 00:18:12.640 ], 00:18:12.640 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:18:12.640 "serial_number": "SPDK00000000000003", 00:18:12.640 "subtype": "NVMe" 00:18:12.640 }, 00:18:12.640 { 00:18:12.640 "allow_any_host": true, 00:18:12.640 "hosts": [], 00:18:12.640 "listen_addresses": [ 00:18:12.640 { 00:18:12.640 "adrfam": "IPv4", 00:18:12.640 "traddr": "10.0.0.2", 00:18:12.640 "transport": "TCP", 00:18:12.640 "trsvcid": "4420", 00:18:12.640 "trtype": "TCP" 00:18:12.640 } 00:18:12.640 ], 00:18:12.640 "max_cntlid": 65519, 00:18:12.640 "max_namespaces": 32, 00:18:12.640 "min_cntlid": 1, 00:18:12.640 "model_number": "SPDK bdev Controller", 00:18:12.640 "namespaces": [ 00:18:12.640 { 00:18:12.640 "bdev_name": "Null4", 00:18:12.640 "name": "Null4", 00:18:12.640 "nguid": "47ADFA22FB29405E985E900585C13063", 00:18:12.640 "nsid": 1, 00:18:12.640 "uuid": "47adfa22-fb29-405e-985e-900585c13063" 00:18:12.640 } 00:18:12.640 ], 00:18:12.640 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:18:12.640 "serial_number": "SPDK00000000000004", 00:18:12.640 "subtype": "NVMe" 00:18:12.640 } 00:18:12.640 ] 00:18:12.640 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.640 11:09:20 -- target/discovery.sh@42 -- # seq 1 4 00:18:12.640 11:09:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:18:12.640 11:09:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.640 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.640 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.640 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.640 11:09:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:18:12.640 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.640 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.640 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.640 11:09:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:18:12.640 11:09:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:12.640 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.640 11:09:20 -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.640 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.640 11:09:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:18:12.640 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.640 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.640 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.640 11:09:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:18:12.640 11:09:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:12.640 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.640 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.640 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.640 11:09:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:18:12.640 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.640 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.640 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.640 11:09:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:18:12.640 11:09:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:12.640 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.640 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.640 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.640 11:09:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:18:12.640 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.640 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.899 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.899 11:09:20 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:18:12.899 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.899 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.899 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.899 11:09:20 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:18:12.899 11:09:20 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:18:12.899 11:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.899 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.899 11:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.899 11:09:20 -- target/discovery.sh@49 -- # check_bdevs= 00:18:12.899 11:09:20 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:18:12.899 11:09:20 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:18:12.899 11:09:20 -- target/discovery.sh@57 -- # nvmftestfini 00:18:12.899 11:09:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:12.899 11:09:20 -- nvmf/common.sh@117 -- # sync 00:18:12.899 11:09:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.899 11:09:20 -- nvmf/common.sh@120 -- # set +e 00:18:12.899 11:09:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.899 11:09:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.899 rmmod nvme_tcp 00:18:12.899 rmmod nvme_fabrics 00:18:12.899 rmmod nvme_keyring 00:18:12.899 11:09:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.899 11:09:20 -- nvmf/common.sh@124 -- # set -e 00:18:12.899 11:09:20 -- nvmf/common.sh@125 -- # return 0 00:18:12.899 11:09:20 -- nvmf/common.sh@478 -- # '[' -n 67354 ']' 00:18:12.899 11:09:20 -- nvmf/common.sh@479 -- # 
killprocess 67354 00:18:12.899 11:09:20 -- common/autotest_common.sh@936 -- # '[' -z 67354 ']' 00:18:12.899 11:09:20 -- common/autotest_common.sh@940 -- # kill -0 67354 00:18:12.899 11:09:20 -- common/autotest_common.sh@941 -- # uname 00:18:12.899 11:09:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.899 11:09:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67354 00:18:12.899 11:09:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:12.899 11:09:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:12.899 killing process with pid 67354 00:18:12.899 11:09:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67354' 00:18:12.899 11:09:21 -- common/autotest_common.sh@955 -- # kill 67354 00:18:12.899 [2024-04-18 11:09:21.024053] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:18:12.899 11:09:21 -- common/autotest_common.sh@960 -- # wait 67354 00:18:14.307 11:09:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:14.307 11:09:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:14.307 11:09:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:14.307 11:09:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.307 11:09:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.307 11:09:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.307 11:09:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.307 11:09:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.307 11:09:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:14.307 00:18:14.307 real 0m3.290s 00:18:14.307 user 0m8.087s 00:18:14.307 sys 0m0.800s 00:18:14.307 11:09:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:14.307 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:18:14.307 ************************************ 00:18:14.307 END TEST nvmf_discovery 00:18:14.307 ************************************ 00:18:14.307 11:09:22 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:18:14.307 11:09:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:14.307 11:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:14.307 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:18:14.307 ************************************ 00:18:14.307 START TEST nvmf_referrals 00:18:14.307 ************************************ 00:18:14.307 11:09:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:18:14.307 * Looking for test storage... 
00:18:14.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:14.307 11:09:22 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.307 11:09:22 -- nvmf/common.sh@7 -- # uname -s 00:18:14.307 11:09:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.307 11:09:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.307 11:09:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.307 11:09:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.307 11:09:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.307 11:09:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.307 11:09:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.307 11:09:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.307 11:09:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.307 11:09:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.307 11:09:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:14.307 11:09:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:14.307 11:09:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.307 11:09:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.307 11:09:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.307 11:09:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.307 11:09:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.307 11:09:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.307 11:09:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.307 11:09:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.307 11:09:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.307 11:09:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.307 11:09:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.307 11:09:22 -- paths/export.sh@5 -- # export PATH 00:18:14.307 11:09:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.307 11:09:22 -- nvmf/common.sh@47 -- # : 0 00:18:14.307 11:09:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.307 11:09:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.307 11:09:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.307 11:09:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.307 11:09:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.307 11:09:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.307 11:09:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.307 11:09:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.307 11:09:22 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:18:14.307 11:09:22 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:18:14.307 11:09:22 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:18:14.307 11:09:22 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:18:14.307 11:09:22 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:14.307 11:09:22 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:14.307 11:09:22 -- target/referrals.sh@37 -- # nvmftestinit 00:18:14.307 11:09:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:14.308 11:09:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.308 11:09:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:14.308 11:09:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:14.308 11:09:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:14.308 11:09:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.308 11:09:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.308 11:09:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.308 11:09:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:14.308 11:09:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:14.308 11:09:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:14.308 11:09:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:14.308 11:09:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:14.308 11:09:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:14.308 11:09:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.308 11:09:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:18:14.308 11:09:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:14.308 11:09:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:14.308 11:09:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.308 11:09:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.308 11:09:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.308 11:09:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.308 11:09:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.308 11:09:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.308 11:09:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.308 11:09:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.308 11:09:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:14.308 11:09:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:14.308 Cannot find device "nvmf_tgt_br" 00:18:14.308 11:09:22 -- nvmf/common.sh@155 -- # true 00:18:14.308 11:09:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.308 Cannot find device "nvmf_tgt_br2" 00:18:14.308 11:09:22 -- nvmf/common.sh@156 -- # true 00:18:14.308 11:09:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:14.308 11:09:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:14.566 Cannot find device "nvmf_tgt_br" 00:18:14.566 11:09:22 -- nvmf/common.sh@158 -- # true 00:18:14.566 11:09:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:14.566 Cannot find device "nvmf_tgt_br2" 00:18:14.566 11:09:22 -- nvmf/common.sh@159 -- # true 00:18:14.566 11:09:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:14.566 11:09:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:14.566 11:09:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.566 11:09:22 -- nvmf/common.sh@162 -- # true 00:18:14.567 11:09:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.567 11:09:22 -- nvmf/common.sh@163 -- # true 00:18:14.567 11:09:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.567 11:09:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.567 11:09:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.567 11:09:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.567 11:09:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.567 11:09:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.567 11:09:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.567 11:09:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:14.567 11:09:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:14.567 11:09:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:14.567 11:09:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:14.567 11:09:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:18:14.567 11:09:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:14.567 11:09:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.567 11:09:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.567 11:09:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.567 11:09:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:14.567 11:09:22 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:14.567 11:09:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.567 11:09:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.567 11:09:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.567 11:09:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.567 11:09:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.826 11:09:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:14.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:18:14.826 00:18:14.826 --- 10.0.0.2 ping statistics --- 00:18:14.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.826 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:14.826 11:09:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:14.826 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.826 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:14.826 00:18:14.826 --- 10.0.0.3 ping statistics --- 00:18:14.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.826 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:14.826 11:09:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:14.826 00:18:14.826 --- 10.0.0.1 ping statistics --- 00:18:14.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.826 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:14.826 11:09:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.826 11:09:22 -- nvmf/common.sh@422 -- # return 0 00:18:14.826 11:09:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:14.826 11:09:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.826 11:09:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:14.826 11:09:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:14.826 11:09:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.826 11:09:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:14.826 11:09:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:14.826 11:09:22 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:18:14.826 11:09:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:14.826 11:09:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:14.826 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:18:14.826 11:09:22 -- nvmf/common.sh@470 -- # nvmfpid=67601 00:18:14.826 11:09:22 -- nvmf/common.sh@471 -- # waitforlisten 67601 00:18:14.826 11:09:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:14.826 11:09:22 -- common/autotest_common.sh@817 -- # '[' -z 67601 ']' 00:18:14.826 11:09:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.826 11:09:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:14.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.826 11:09:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.826 11:09:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:14.826 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:18:14.826 [2024-04-18 11:09:22.944528] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:14.826 [2024-04-18 11:09:22.944715] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.084 [2024-04-18 11:09:23.129842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.342 [2024-04-18 11:09:23.425859] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.342 [2024-04-18 11:09:23.425922] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.342 [2024-04-18 11:09:23.425943] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.342 [2024-04-18 11:09:23.425956] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.342 [2024-04-18 11:09:23.425970] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
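The nvmf_veth_init trace above builds the virtual test network that the rest of this job talks over. A condensed, hand-written sketch of that topology (using the same namespace, interface, and address names that appear in the log; it is not a verbatim copy of common.sh) looks roughly like this:

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: one for the initiator, two for the target
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator gets 10.0.0.1, target interfaces get 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up on both sides of the namespace boundary
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # join the host-side veth ends with a bridge
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP (port 4420) in and let traffic hairpin across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity pings, as in the log: 10.0.0.2/10.0.0.3 from the host, 10.0.0.1 from the namespace

The earlier "Cannot find device" / "Cannot open network namespace" messages in the trace appear to be the teardown of a previous topology failing harmlessly because nothing existed yet.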
00:18:15.342 [2024-04-18 11:09:23.426251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.342 [2024-04-18 11:09:23.426330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.342 [2024-04-18 11:09:23.426432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.342 [2024-04-18 11:09:23.426445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.908 11:09:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:15.908 11:09:23 -- common/autotest_common.sh@850 -- # return 0 00:18:15.908 11:09:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:15.908 11:09:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:15.908 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:18:15.908 11:09:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.908 11:09:23 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.908 11:09:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.908 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:18:15.908 [2024-04-18 11:09:23.948305] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.908 11:09:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.908 11:09:23 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:18:15.908 11:09:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.908 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:18:15.908 [2024-04-18 11:09:23.975908] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:15.908 11:09:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.908 11:09:23 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:18:15.908 11:09:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.908 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:18:15.908 11:09:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.908 11:09:23 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:18:15.908 11:09:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.908 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:18:15.908 11:09:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.908 11:09:23 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:18:15.908 11:09:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.908 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:18:15.908 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.908 11:09:24 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:15.908 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.908 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:15.908 11:09:24 -- target/referrals.sh@48 -- # jq length 00:18:15.908 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.908 11:09:24 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:18:15.908 11:09:24 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:18:15.908 11:09:24 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:18:15.908 11:09:24 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:15.908 11:09:24 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
00:18:15.908 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.908 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:15.908 11:09:24 -- target/referrals.sh@21 -- # sort 00:18:15.908 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.908 11:09:24 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:18:15.908 11:09:24 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:18:15.908 11:09:24 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:18:15.908 11:09:24 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:18:15.908 11:09:24 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:18:15.908 11:09:24 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:15.908 11:09:24 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:18:15.908 11:09:24 -- target/referrals.sh@26 -- # sort 00:18:16.167 11:09:24 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:18:16.167 11:09:24 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:18:16.167 11:09:24 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:18:16.167 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.167 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.167 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.167 11:09:24 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:18:16.167 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.167 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.167 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.167 11:09:24 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:18:16.167 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.167 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.167 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.167 11:09:24 -- target/referrals.sh@56 -- # jq length 00:18:16.167 11:09:24 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:16.167 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.167 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.167 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.167 11:09:24 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:18:16.167 11:09:24 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:18:16.167 11:09:24 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:18:16.167 11:09:24 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:18:16.167 11:09:24 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:16.167 11:09:24 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:18:16.167 11:09:24 -- target/referrals.sh@26 -- # sort 00:18:16.425 11:09:24 -- target/referrals.sh@26 -- # echo 00:18:16.425 11:09:24 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:18:16.425 11:09:24 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:18:16.425 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.425 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.425 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.425 11:09:24 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:18:16.425 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.425 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.425 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.425 11:09:24 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:18:16.426 11:09:24 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:18:16.426 11:09:24 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:16.426 11:09:24 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:18:16.426 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.426 11:09:24 -- target/referrals.sh@21 -- # sort 00:18:16.426 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.426 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.426 11:09:24 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:18:16.426 11:09:24 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:18:16.426 11:09:24 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:18:16.426 11:09:24 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:18:16.426 11:09:24 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:18:16.426 11:09:24 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:18:16.426 11:09:24 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:16.426 11:09:24 -- target/referrals.sh@26 -- # sort 00:18:16.426 11:09:24 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:18:16.426 11:09:24 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:18:16.426 11:09:24 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:18:16.426 11:09:24 -- target/referrals.sh@67 -- # jq -r .subnqn 00:18:16.426 11:09:24 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:18:16.426 11:09:24 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:16.426 11:09:24 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:18:16.426 11:09:24 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:18:16.426 11:09:24 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:18:16.426 11:09:24 -- target/referrals.sh@68 -- # jq -r .subnqn 00:18:16.426 11:09:24 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:18:16.426 11:09:24 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 
--hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:16.426 11:09:24 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:18:16.684 11:09:24 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:18:16.684 11:09:24 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:18:16.684 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.684 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.684 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.684 11:09:24 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:18:16.684 11:09:24 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:18:16.684 11:09:24 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:16.684 11:09:24 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:18:16.684 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.684 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.684 11:09:24 -- target/referrals.sh@21 -- # sort 00:18:16.684 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.684 11:09:24 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:18:16.684 11:09:24 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:18:16.684 11:09:24 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:18:16.684 11:09:24 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:18:16.684 11:09:24 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:18:16.685 11:09:24 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:16.685 11:09:24 -- target/referrals.sh@26 -- # sort 00:18:16.685 11:09:24 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:18:16.685 11:09:24 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:18:16.685 11:09:24 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:18:16.685 11:09:24 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:18:16.685 11:09:24 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:18:16.685 11:09:24 -- target/referrals.sh@75 -- # jq -r .subnqn 00:18:16.685 11:09:24 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:16.685 11:09:24 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:18:16.685 11:09:24 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:18:16.685 11:09:24 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:18:16.685 11:09:24 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:18:16.685 11:09:24 -- target/referrals.sh@76 -- # jq -r .subnqn 00:18:16.685 11:09:24 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:16.685 11:09:24 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
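The referral checks above all follow the same pattern: mutate the discovery referral list over RPC, then confirm that the RPC view and an actual NVMe/TCP discovery from the initiator agree. A rough sketch of one round trip, using the same commands and arguments that appear in the trace (rpc_cmd is the autotest helper that issues these as JSON-RPC calls to the running nvmf_tgt, and NVME_HOSTNQN/NVME_HOSTID are the generated values shown earlier):

  # publish three referrals on the discovery service listening at 10.0.0.2:8009
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430

  # target-side view: should report 127.0.0.2 127.0.0.3 127.0.0.4
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # initiator-side view: run discovery and filter out the discovery subsystem's own entry
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # remove them again and expect both views to drop back to empty
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430

The later passes in the trace repeat the same idea but add the referral with an explicit subsystem NQN (-n nqn.2016-06.io.spdk:cnode1 or -n nqn.2014-08.org.nvmexpress.discovery), so the entry shows up in the nvme discover output as either an "nvme subsystem" record or a "discovery subsystem referral" record, which is what the jq select() filters above are checking.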
00:18:16.943 11:09:24 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:18:16.943 11:09:24 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:18:16.943 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.943 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.943 11:09:24 -- target/referrals.sh@82 -- # jq length 00:18:16.943 11:09:24 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:16.943 11:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.943 11:09:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 11:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.943 11:09:25 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:18:16.943 11:09:25 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:18:16.943 11:09:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:18:16.943 11:09:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:18:16.943 11:09:25 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:16.943 11:09:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:18:16.943 11:09:25 -- target/referrals.sh@26 -- # sort 00:18:16.943 11:09:25 -- target/referrals.sh@26 -- # echo 00:18:16.943 11:09:25 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:18:16.943 11:09:25 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:18:16.943 11:09:25 -- target/referrals.sh@86 -- # nvmftestfini 00:18:16.943 11:09:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:16.943 11:09:25 -- nvmf/common.sh@117 -- # sync 00:18:16.943 11:09:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.943 11:09:25 -- nvmf/common.sh@120 -- # set +e 00:18:16.943 11:09:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.943 11:09:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.943 rmmod nvme_tcp 00:18:16.943 rmmod nvme_fabrics 00:18:17.201 rmmod nvme_keyring 00:18:17.201 11:09:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.201 11:09:25 -- nvmf/common.sh@124 -- # set -e 00:18:17.201 11:09:25 -- nvmf/common.sh@125 -- # return 0 00:18:17.201 11:09:25 -- nvmf/common.sh@478 -- # '[' -n 67601 ']' 00:18:17.201 11:09:25 -- nvmf/common.sh@479 -- # killprocess 67601 00:18:17.201 11:09:25 -- common/autotest_common.sh@936 -- # '[' -z 67601 ']' 00:18:17.201 11:09:25 -- common/autotest_common.sh@940 -- # kill -0 67601 00:18:17.201 11:09:25 -- common/autotest_common.sh@941 -- # uname 00:18:17.201 11:09:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:17.201 11:09:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67601 00:18:17.201 11:09:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:17.201 killing process with pid 67601 00:18:17.201 11:09:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:17.201 11:09:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67601' 00:18:17.201 11:09:25 -- common/autotest_common.sh@955 -- # kill 67601 00:18:17.201 11:09:25 -- common/autotest_common.sh@960 -- # wait 67601 00:18:18.572 11:09:26 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:18.572 11:09:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:18.572 11:09:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:18.572 11:09:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.572 11:09:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:18.572 11:09:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.572 11:09:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.572 11:09:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.572 11:09:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:18.572 ************************************ 00:18:18.572 END TEST nvmf_referrals 00:18:18.572 ************************************ 00:18:18.572 00:18:18.572 real 0m4.074s 00:18:18.572 user 0m11.931s 00:18:18.572 sys 0m1.116s 00:18:18.572 11:09:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:18.572 11:09:26 -- common/autotest_common.sh@10 -- # set +x 00:18:18.572 11:09:26 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:18:18.572 11:09:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:18.572 11:09:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:18.572 11:09:26 -- common/autotest_common.sh@10 -- # set +x 00:18:18.572 ************************************ 00:18:18.572 START TEST nvmf_connect_disconnect 00:18:18.572 ************************************ 00:18:18.572 11:09:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:18:18.572 * Looking for test storage... 00:18:18.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:18.572 11:09:26 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:18.572 11:09:26 -- nvmf/common.sh@7 -- # uname -s 00:18:18.572 11:09:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.572 11:09:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.572 11:09:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.572 11:09:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.572 11:09:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.572 11:09:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.572 11:09:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.572 11:09:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.572 11:09:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.572 11:09:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.572 11:09:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:18.572 11:09:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:18.572 11:09:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.573 11:09:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.573 11:09:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.573 11:09:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.573 11:09:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.573 11:09:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.573 11:09:26 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.573 11:09:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.573 11:09:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.573 11:09:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.573 11:09:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.573 11:09:26 -- paths/export.sh@5 -- # export PATH 00:18:18.573 11:09:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.573 11:09:26 -- nvmf/common.sh@47 -- # : 0 00:18:18.573 11:09:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.573 11:09:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.573 11:09:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.573 11:09:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.573 11:09:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.573 11:09:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.573 11:09:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.573 11:09:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.573 11:09:26 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.573 11:09:26 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.573 11:09:26 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:18:18.573 11:09:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:18.573 11:09:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.573 11:09:26 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:18:18.573 11:09:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:18.573 11:09:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:18.573 11:09:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.573 11:09:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.573 11:09:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.573 11:09:26 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:18.573 11:09:26 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:18.573 11:09:26 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:18.573 11:09:26 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:18.573 11:09:26 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:18.573 11:09:26 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:18.573 11:09:26 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.573 11:09:26 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.573 11:09:26 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:18.573 11:09:26 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:18.573 11:09:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:18.573 11:09:26 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:18.573 11:09:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:18.573 11:09:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.573 11:09:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:18.573 11:09:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:18.573 11:09:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:18.573 11:09:26 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:18.573 11:09:26 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:18.573 11:09:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:18.573 Cannot find device "nvmf_tgt_br" 00:18:18.573 11:09:26 -- nvmf/common.sh@155 -- # true 00:18:18.573 11:09:26 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.573 Cannot find device "nvmf_tgt_br2" 00:18:18.573 11:09:26 -- nvmf/common.sh@156 -- # true 00:18:18.573 11:09:26 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:18.573 11:09:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:18.573 Cannot find device "nvmf_tgt_br" 00:18:18.573 11:09:26 -- nvmf/common.sh@158 -- # true 00:18:18.573 11:09:26 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:18.573 Cannot find device "nvmf_tgt_br2" 00:18:18.573 11:09:26 -- nvmf/common.sh@159 -- # true 00:18:18.573 11:09:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:18.573 11:09:26 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:18.830 11:09:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.830 11:09:26 -- nvmf/common.sh@162 -- # true 00:18:18.830 11:09:26 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.830 11:09:26 -- nvmf/common.sh@163 -- # true 00:18:18.830 11:09:26 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:18.830 11:09:26 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:18:18.830 11:09:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:18.830 11:09:26 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:18.830 11:09:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:18.830 11:09:26 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:18.830 11:09:26 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:18.830 11:09:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:18.830 11:09:26 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:18.830 11:09:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:18.830 11:09:26 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:18.830 11:09:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:18.830 11:09:26 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:18.830 11:09:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:18.830 11:09:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:18.830 11:09:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:18.830 11:09:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:18.830 11:09:26 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:18.830 11:09:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:18.830 11:09:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:18.830 11:09:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:18.831 11:09:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:18.831 11:09:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:18.831 11:09:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:18.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:18:18.831 00:18:18.831 --- 10.0.0.2 ping statistics --- 00:18:18.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.831 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:18.831 11:09:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:18.831 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:18.831 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:18:18.831 00:18:18.831 --- 10.0.0.3 ping statistics --- 00:18:18.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.831 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:18.831 11:09:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:18.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:18.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:18.831 00:18:18.831 --- 10.0.0.1 ping statistics --- 00:18:18.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.831 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:18.831 11:09:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.831 11:09:27 -- nvmf/common.sh@422 -- # return 0 00:18:18.831 11:09:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:18.831 11:09:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.831 11:09:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:18.831 11:09:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:18.831 11:09:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.831 11:09:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:18.831 11:09:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:19.088 11:09:27 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:18:19.088 11:09:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:19.089 11:09:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:19.089 11:09:27 -- common/autotest_common.sh@10 -- # set +x 00:18:19.089 11:09:27 -- nvmf/common.sh@470 -- # nvmfpid=67929 00:18:19.089 11:09:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:19.089 11:09:27 -- nvmf/common.sh@471 -- # waitforlisten 67929 00:18:19.089 11:09:27 -- common/autotest_common.sh@817 -- # '[' -z 67929 ']' 00:18:19.089 11:09:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.089 11:09:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:19.089 11:09:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.089 11:09:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:19.089 11:09:27 -- common/autotest_common.sh@10 -- # set +x 00:18:19.089 [2024-04-18 11:09:27.161499] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:19.089 [2024-04-18 11:09:27.161671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.432 [2024-04-18 11:09:27.333879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.432 [2024-04-18 11:09:27.576799] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.432 [2024-04-18 11:09:27.576873] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.432 [2024-04-18 11:09:27.576896] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.432 [2024-04-18 11:09:27.576910] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.432 [2024-04-18 11:09:27.576925] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
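As with the referrals run, the target application is started inside the namespace so that it owns 10.0.0.2/10.0.0.3 while the initiator-side tools stay on the host. The essential steps, condensed from the nvmfappstart trace above (waitforlisten is a common.sh helper that blocks until the application answers on its RPC socket, /var/tmp/spdk.sock here):

  # launch nvmf_tgt on cores 0-3 inside the target namespace
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # wait for /var/tmp/spdk.sock to accept JSON-RPC requests
  waitforlisten "$nvmfpid"

Once the socket is up, every subsequent rpc_cmd call in the trace is configuration of this one process.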
00:18:19.432 [2024-04-18 11:09:27.577197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.432 [2024-04-18 11:09:27.577447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.433 [2024-04-18 11:09:27.577477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.433 [2024-04-18 11:09:27.578159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.998 11:09:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:19.998 11:09:28 -- common/autotest_common.sh@850 -- # return 0 00:18:19.998 11:09:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:19.998 11:09:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:19.998 11:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:19.998 11:09:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.998 11:09:28 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:18:19.998 11:09:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.998 11:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:19.998 [2024-04-18 11:09:28.139767] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.998 11:09:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.998 11:09:28 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:18:19.998 11:09:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.998 11:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:20.256 11:09:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:20.256 11:09:28 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:18:20.256 11:09:28 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:20.256 11:09:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:20.256 11:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:20.256 11:09:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:20.256 11:09:28 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:20.256 11:09:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:20.256 11:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:20.256 11:09:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:20.256 11:09:28 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.256 11:09:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:20.256 11:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:20.256 [2024-04-18 11:09:28.264610] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.256 11:09:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:20.256 11:09:28 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:18:20.256 11:09:28 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:18:20.256 11:09:28 -- target/connect_disconnect.sh@34 -- # set +x 00:18:22.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.697 11:09:39 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:31.697 11:09:39 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:31.697 11:09:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:31.697 11:09:39 -- nvmf/common.sh@117 -- # sync 00:18:31.697 11:09:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:31.697 11:09:39 -- nvmf/common.sh@120 -- # set +e 00:18:31.697 11:09:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.697 11:09:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:31.697 rmmod nvme_tcp 00:18:31.697 rmmod nvme_fabrics 00:18:31.697 rmmod nvme_keyring 00:18:31.697 11:09:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.697 11:09:39 -- nvmf/common.sh@124 -- # set -e 00:18:31.697 11:09:39 -- nvmf/common.sh@125 -- # return 0 00:18:31.697 11:09:39 -- nvmf/common.sh@478 -- # '[' -n 67929 ']' 00:18:31.697 11:09:39 -- nvmf/common.sh@479 -- # killprocess 67929 00:18:31.697 11:09:39 -- common/autotest_common.sh@936 -- # '[' -z 67929 ']' 00:18:31.697 11:09:39 -- common/autotest_common.sh@940 -- # kill -0 67929 00:18:31.697 11:09:39 -- common/autotest_common.sh@941 -- # uname 00:18:31.697 11:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.697 11:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67929 00:18:31.697 11:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:31.697 11:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:31.697 killing process with pid 67929 00:18:31.697 11:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67929' 00:18:31.697 11:09:39 -- common/autotest_common.sh@955 -- # kill 67929 00:18:31.697 11:09:39 -- common/autotest_common.sh@960 -- # wait 67929 00:18:33.071 11:09:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:33.071 11:09:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:33.071 11:09:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:33.071 11:09:41 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.071 11:09:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.071 11:09:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.071 11:09:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.071 11:09:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.071 11:09:41 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:33.071 00:18:33.071 real 0m14.547s 00:18:33.071 user 0m51.903s 00:18:33.071 sys 0m1.844s 00:18:33.071 11:09:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:33.071 11:09:41 -- common/autotest_common.sh@10 -- # set +x 00:18:33.071 ************************************ 00:18:33.071 END TEST nvmf_connect_disconnect 00:18:33.071 ************************************ 00:18:33.071 11:09:41 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:33.071 11:09:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:33.071 11:09:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:33.071 11:09:41 -- common/autotest_common.sh@10 -- # set +x 00:18:33.071 ************************************ 00:18:33.071 START TEST nvmf_multitarget 00:18:33.071 ************************************ 00:18:33.071 11:09:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:33.330 * Looking for test storage... 
00:18:33.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:33.330 11:09:41 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:33.330 11:09:41 -- nvmf/common.sh@7 -- # uname -s 00:18:33.330 11:09:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.330 11:09:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.330 11:09:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.330 11:09:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.330 11:09:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.330 11:09:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.330 11:09:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.330 11:09:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.330 11:09:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.330 11:09:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.330 11:09:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:33.330 11:09:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:33.330 11:09:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.330 11:09:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.330 11:09:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:33.330 11:09:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.330 11:09:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:33.330 11:09:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.330 11:09:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.330 11:09:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.330 11:09:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.330 11:09:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.331 11:09:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.331 11:09:41 -- paths/export.sh@5 -- # export PATH 00:18:33.331 11:09:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.331 11:09:41 -- nvmf/common.sh@47 -- # : 0 00:18:33.331 11:09:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.331 11:09:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.331 11:09:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.331 11:09:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.331 11:09:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.331 11:09:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.331 11:09:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.331 11:09:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.331 11:09:41 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:18:33.331 11:09:41 -- target/multitarget.sh@15 -- # nvmftestinit 00:18:33.331 11:09:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:33.331 11:09:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.331 11:09:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:33.331 11:09:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:33.331 11:09:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:33.331 11:09:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.331 11:09:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.331 11:09:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.331 11:09:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:33.331 11:09:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:33.331 11:09:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:33.331 11:09:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:33.331 11:09:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:33.331 11:09:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:33.331 11:09:41 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.331 11:09:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.331 11:09:41 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:33.331 11:09:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:33.331 11:09:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:33.331 11:09:41 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:33.331 11:09:41 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:33.331 11:09:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.331 11:09:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:33.331 11:09:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:33.331 11:09:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:33.331 11:09:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:33.331 11:09:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:33.331 11:09:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:33.331 Cannot find device "nvmf_tgt_br" 00:18:33.331 11:09:41 -- nvmf/common.sh@155 -- # true 00:18:33.331 11:09:41 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:33.331 Cannot find device "nvmf_tgt_br2" 00:18:33.331 11:09:41 -- nvmf/common.sh@156 -- # true 00:18:33.331 11:09:41 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:33.331 11:09:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:33.331 Cannot find device "nvmf_tgt_br" 00:18:33.331 11:09:41 -- nvmf/common.sh@158 -- # true 00:18:33.331 11:09:41 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:33.331 Cannot find device "nvmf_tgt_br2" 00:18:33.331 11:09:41 -- nvmf/common.sh@159 -- # true 00:18:33.331 11:09:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:33.331 11:09:41 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:33.331 11:09:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:33.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:33.331 11:09:41 -- nvmf/common.sh@162 -- # true 00:18:33.331 11:09:41 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:33.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:33.331 11:09:41 -- nvmf/common.sh@163 -- # true 00:18:33.331 11:09:41 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:33.331 11:09:41 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:33.331 11:09:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:33.331 11:09:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:33.331 11:09:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:33.331 11:09:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:33.331 11:09:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:33.331 11:09:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:33.590 11:09:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:33.590 11:09:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:33.590 11:09:41 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:33.590 11:09:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:33.590 11:09:41 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:33.590 11:09:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:33.590 11:09:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:33.590 11:09:41 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:18:33.590 11:09:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:33.590 11:09:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:33.590 11:09:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:33.590 11:09:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:33.590 11:09:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:33.590 11:09:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:33.590 11:09:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:33.590 11:09:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:33.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:18:33.590 00:18:33.590 --- 10.0.0.2 ping statistics --- 00:18:33.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.590 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:33.590 11:09:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:33.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:33.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:33.590 00:18:33.590 --- 10.0.0.3 ping statistics --- 00:18:33.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.590 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:33.590 11:09:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:33.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:33.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:18:33.590 00:18:33.590 --- 10.0.0.1 ping statistics --- 00:18:33.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.590 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:33.590 11:09:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.590 11:09:41 -- nvmf/common.sh@422 -- # return 0 00:18:33.590 11:09:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:33.590 11:09:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.590 11:09:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:33.590 11:09:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:33.590 11:09:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.590 11:09:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:33.590 11:09:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:33.591 11:09:41 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:33.591 11:09:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:33.591 11:09:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:33.591 11:09:41 -- common/autotest_common.sh@10 -- # set +x 00:18:33.591 11:09:41 -- nvmf/common.sh@470 -- # nvmfpid=68348 00:18:33.591 11:09:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:33.591 11:09:41 -- nvmf/common.sh@471 -- # waitforlisten 68348 00:18:33.591 11:09:41 -- common/autotest_common.sh@817 -- # '[' -z 68348 ']' 00:18:33.591 11:09:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.591 11:09:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:33.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
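In short, the nvmf_veth_init sequence traced above builds the private test network that the rest of this run relies on: a network namespace for the target, three veth pairs, a bridge joining the host-side peer interfaces, and iptables rules opening TCP port 4420. The sketch below only restates commands already visible in this trace (the per-link "ip link set ... up" calls are omitted for brevity); interface names and the 10.0.0.0/24 addressing are exactly as logged.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host side, 10.0.0.1 from inside nvmf_tgt_ns_spdk) confirm the topology is up before nvmf_tgt is launched.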
00:18:33.591 11:09:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.591 11:09:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:33.591 11:09:41 -- common/autotest_common.sh@10 -- # set +x 00:18:33.850 [2024-04-18 11:09:41.810452] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:33.850 [2024-04-18 11:09:41.810605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.850 [2024-04-18 11:09:41.983923] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.108 [2024-04-18 11:09:42.268040] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.108 [2024-04-18 11:09:42.268130] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.108 [2024-04-18 11:09:42.268165] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.108 [2024-04-18 11:09:42.268180] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.108 [2024-04-18 11:09:42.268196] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.108 [2024-04-18 11:09:42.268384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.108 [2024-04-18 11:09:42.268476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.108 [2024-04-18 11:09:42.268652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.108 [2024-04-18 11:09:42.268692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.674 11:09:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.674 11:09:42 -- common/autotest_common.sh@850 -- # return 0 00:18:34.674 11:09:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:34.674 11:09:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:34.674 11:09:42 -- common/autotest_common.sh@10 -- # set +x 00:18:34.674 11:09:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.674 11:09:42 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:34.674 11:09:42 -- target/multitarget.sh@21 -- # jq length 00:18:34.674 11:09:42 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:34.932 11:09:42 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:34.933 11:09:42 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:34.933 "nvmf_tgt_1" 00:18:34.933 11:09:43 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:35.190 "nvmf_tgt_2" 00:18:35.190 11:09:43 -- target/multitarget.sh@28 -- # jq length 00:18:35.190 11:09:43 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:35.190 11:09:43 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:35.190 11:09:43 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_1 00:18:35.190 true 00:18:35.190 11:09:43 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:35.448 true 00:18:35.448 11:09:43 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:35.448 11:09:43 -- target/multitarget.sh@35 -- # jq length 00:18:35.448 11:09:43 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:35.448 11:09:43 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:35.448 11:09:43 -- target/multitarget.sh@41 -- # nvmftestfini 00:18:35.448 11:09:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:35.448 11:09:43 -- nvmf/common.sh@117 -- # sync 00:18:35.706 11:09:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.706 11:09:43 -- nvmf/common.sh@120 -- # set +e 00:18:35.706 11:09:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.706 11:09:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.706 rmmod nvme_tcp 00:18:35.706 rmmod nvme_fabrics 00:18:35.706 rmmod nvme_keyring 00:18:35.706 11:09:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.706 11:09:43 -- nvmf/common.sh@124 -- # set -e 00:18:35.706 11:09:43 -- nvmf/common.sh@125 -- # return 0 00:18:35.706 11:09:43 -- nvmf/common.sh@478 -- # '[' -n 68348 ']' 00:18:35.706 11:09:43 -- nvmf/common.sh@479 -- # killprocess 68348 00:18:35.706 11:09:43 -- common/autotest_common.sh@936 -- # '[' -z 68348 ']' 00:18:35.706 11:09:43 -- common/autotest_common.sh@940 -- # kill -0 68348 00:18:35.706 11:09:43 -- common/autotest_common.sh@941 -- # uname 00:18:35.706 11:09:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.706 11:09:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68348 00:18:35.706 11:09:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:35.706 11:09:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:35.706 11:09:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68348' 00:18:35.706 killing process with pid 68348 00:18:35.706 11:09:43 -- common/autotest_common.sh@955 -- # kill 68348 00:18:35.706 11:09:43 -- common/autotest_common.sh@960 -- # wait 68348 00:18:37.082 11:09:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:37.082 11:09:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:37.082 11:09:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:37.082 11:09:44 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.082 11:09:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.082 11:09:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.082 11:09:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.082 11:09:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.082 11:09:44 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:37.082 00:18:37.082 real 0m3.788s 00:18:37.082 user 0m10.937s 00:18:37.082 sys 0m0.845s 00:18:37.082 11:09:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:37.082 11:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:37.082 ************************************ 00:18:37.082 END TEST nvmf_multitarget 00:18:37.083 ************************************ 00:18:37.083 11:09:45 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:37.083 11:09:45 -- common/autotest_common.sh@1087 -- # '[' 
3 -le 1 ']' 00:18:37.083 11:09:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:37.083 11:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:37.083 ************************************ 00:18:37.083 START TEST nvmf_rpc 00:18:37.083 ************************************ 00:18:37.083 11:09:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:37.083 * Looking for test storage... 00:18:37.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:37.083 11:09:45 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:37.083 11:09:45 -- nvmf/common.sh@7 -- # uname -s 00:18:37.083 11:09:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.083 11:09:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.083 11:09:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.083 11:09:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.083 11:09:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.083 11:09:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.083 11:09:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.083 11:09:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.083 11:09:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.083 11:09:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.083 11:09:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:37.083 11:09:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:37.083 11:09:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.083 11:09:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.083 11:09:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:37.083 11:09:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.083 11:09:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:37.083 11:09:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.083 11:09:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.083 11:09:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.083 11:09:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.083 11:09:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.083 11:09:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.083 11:09:45 -- paths/export.sh@5 -- # export PATH 00:18:37.083 11:09:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.083 11:09:45 -- nvmf/common.sh@47 -- # : 0 00:18:37.083 11:09:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.083 11:09:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.083 11:09:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.083 11:09:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.083 11:09:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.083 11:09:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.083 11:09:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.083 11:09:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.083 11:09:45 -- target/rpc.sh@11 -- # loops=5 00:18:37.083 11:09:45 -- target/rpc.sh@23 -- # nvmftestinit 00:18:37.083 11:09:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:37.083 11:09:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.083 11:09:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:37.083 11:09:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:37.083 11:09:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:37.083 11:09:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.083 11:09:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.083 11:09:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.083 11:09:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:37.083 11:09:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:37.083 11:09:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:37.083 11:09:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:37.083 11:09:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:37.083 11:09:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:37.083 11:09:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.083 11:09:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.083 11:09:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:37.083 11:09:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:37.083 11:09:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:37.083 11:09:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:37.083 11:09:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:37.083 11:09:45 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.083 11:09:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:37.083 11:09:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:37.083 11:09:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:37.083 11:09:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:37.083 11:09:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:37.083 11:09:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:37.083 Cannot find device "nvmf_tgt_br" 00:18:37.083 11:09:45 -- nvmf/common.sh@155 -- # true 00:18:37.083 11:09:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:37.083 Cannot find device "nvmf_tgt_br2" 00:18:37.083 11:09:45 -- nvmf/common.sh@156 -- # true 00:18:37.083 11:09:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:37.083 11:09:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:37.083 Cannot find device "nvmf_tgt_br" 00:18:37.083 11:09:45 -- nvmf/common.sh@158 -- # true 00:18:37.083 11:09:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:37.342 Cannot find device "nvmf_tgt_br2" 00:18:37.342 11:09:45 -- nvmf/common.sh@159 -- # true 00:18:37.342 11:09:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:37.342 11:09:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:37.342 11:09:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:37.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.342 11:09:45 -- nvmf/common.sh@162 -- # true 00:18:37.342 11:09:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:37.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.342 11:09:45 -- nvmf/common.sh@163 -- # true 00:18:37.342 11:09:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:37.342 11:09:45 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:37.342 11:09:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:37.342 11:09:45 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:37.342 11:09:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:37.342 11:09:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:37.342 11:09:45 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:37.342 11:09:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:37.342 11:09:45 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:37.342 11:09:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:37.342 11:09:45 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:37.342 11:09:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:37.342 11:09:45 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:37.342 11:09:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:37.342 11:09:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:37.342 11:09:45 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:37.342 11:09:45 -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:18:37.342 11:09:45 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:37.342 11:09:45 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:37.342 11:09:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:37.342 11:09:45 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:37.601 11:09:45 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:37.601 11:09:45 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:37.601 11:09:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:37.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:18:37.601 00:18:37.601 --- 10.0.0.2 ping statistics --- 00:18:37.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.601 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:37.601 11:09:45 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:37.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:37.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:18:37.601 00:18:37.601 --- 10.0.0.3 ping statistics --- 00:18:37.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.601 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:37.601 11:09:45 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:37.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:37.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:37.601 00:18:37.601 --- 10.0.0.1 ping statistics --- 00:18:37.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.601 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:37.601 11:09:45 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.601 11:09:45 -- nvmf/common.sh@422 -- # return 0 00:18:37.601 11:09:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:37.601 11:09:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.601 11:09:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:37.601 11:09:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:37.601 11:09:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.601 11:09:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:37.601 11:09:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:37.601 11:09:45 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:37.601 11:09:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:37.601 11:09:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:37.601 11:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:37.601 11:09:45 -- nvmf/common.sh@470 -- # nvmfpid=68600 00:18:37.601 11:09:45 -- nvmf/common.sh@471 -- # waitforlisten 68600 00:18:37.602 11:09:45 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:37.602 11:09:45 -- common/autotest_common.sh@817 -- # '[' -z 68600 ']' 00:18:37.602 11:09:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.602 11:09:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:37.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.602 11:09:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
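As with the multitarget run earlier in this log, the rpc.sh test now starts the SPDK target inside the nvmf_tgt_ns_spdk namespace and then drives it entirely over the /var/tmp/spdk.sock RPC socket. A minimal sketch of that start-up follows; it assumes the target is backgrounded and its pid captured, whereas the logged harness records the pid as nvmfpid=68600 and uses waitforlisten (apparently defined in autotest_common.sh) to wait for the RPC socket.

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                       # assumption: backgrounded here; the harness records this as nvmfpid
  waitforlisten "$nvmfpid"         # returns once /var/tmp/spdk.sock accepts RPC connections
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192   # the TCP transport created later in this trace

From that point on the test is pure RPC traffic: nvmf_get_stats to inspect the four poll groups, then nvmf_create_subsystem, nvmf_subsystem_add_listener and nvmf_subsystem_add_ns to publish nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 for the nvme connect calls that follow.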
00:18:37.602 11:09:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.602 11:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:37.602 [2024-04-18 11:09:45.745753] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:37.602 [2024-04-18 11:09:45.745963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.860 [2024-04-18 11:09:45.928012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:38.118 [2024-04-18 11:09:46.224049] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.118 [2024-04-18 11:09:46.224153] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.118 [2024-04-18 11:09:46.224190] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.118 [2024-04-18 11:09:46.224206] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.118 [2024-04-18 11:09:46.224223] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.118 [2024-04-18 11:09:46.224452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.118 [2024-04-18 11:09:46.225459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.118 [2024-04-18 11:09:46.225643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.118 [2024-04-18 11:09:46.225690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:38.684 11:09:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:38.684 11:09:46 -- common/autotest_common.sh@850 -- # return 0 00:18:38.684 11:09:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:38.684 11:09:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:38.684 11:09:46 -- common/autotest_common.sh@10 -- # set +x 00:18:38.684 11:09:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.684 11:09:46 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:38.684 11:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.684 11:09:46 -- common/autotest_common.sh@10 -- # set +x 00:18:38.684 11:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.684 11:09:46 -- target/rpc.sh@26 -- # stats='{ 00:18:38.684 "poll_groups": [ 00:18:38.685 { 00:18:38.685 "admin_qpairs": 0, 00:18:38.685 "completed_nvme_io": 0, 00:18:38.685 "current_admin_qpairs": 0, 00:18:38.685 "current_io_qpairs": 0, 00:18:38.685 "io_qpairs": 0, 00:18:38.685 "name": "nvmf_tgt_poll_group_0", 00:18:38.685 "pending_bdev_io": 0, 00:18:38.685 "transports": [] 00:18:38.685 }, 00:18:38.685 { 00:18:38.685 "admin_qpairs": 0, 00:18:38.685 "completed_nvme_io": 0, 00:18:38.685 "current_admin_qpairs": 0, 00:18:38.685 "current_io_qpairs": 0, 00:18:38.685 "io_qpairs": 0, 00:18:38.685 "name": "nvmf_tgt_poll_group_1", 00:18:38.685 "pending_bdev_io": 0, 00:18:38.685 "transports": [] 00:18:38.685 }, 00:18:38.685 { 00:18:38.685 "admin_qpairs": 0, 00:18:38.685 "completed_nvme_io": 0, 00:18:38.685 "current_admin_qpairs": 0, 00:18:38.685 "current_io_qpairs": 0, 00:18:38.685 "io_qpairs": 0, 00:18:38.685 "name": "nvmf_tgt_poll_group_2", 00:18:38.685 "pending_bdev_io": 0, 00:18:38.685 "transports": [] 00:18:38.685 }, 00:18:38.685 { 
00:18:38.685 "admin_qpairs": 0, 00:18:38.685 "completed_nvme_io": 0, 00:18:38.685 "current_admin_qpairs": 0, 00:18:38.685 "current_io_qpairs": 0, 00:18:38.685 "io_qpairs": 0, 00:18:38.685 "name": "nvmf_tgt_poll_group_3", 00:18:38.685 "pending_bdev_io": 0, 00:18:38.685 "transports": [] 00:18:38.685 } 00:18:38.685 ], 00:18:38.685 "tick_rate": 2200000000 00:18:38.685 }' 00:18:38.685 11:09:46 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:38.685 11:09:46 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:38.685 11:09:46 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:38.685 11:09:46 -- target/rpc.sh@15 -- # wc -l 00:18:38.685 11:09:46 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:38.685 11:09:46 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:38.685 11:09:46 -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:38.685 11:09:46 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.685 11:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.685 11:09:46 -- common/autotest_common.sh@10 -- # set +x 00:18:38.685 [2024-04-18 11:09:46.885591] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.943 11:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.943 11:09:46 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:38.943 11:09:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.943 11:09:46 -- common/autotest_common.sh@10 -- # set +x 00:18:38.943 11:09:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.943 11:09:46 -- target/rpc.sh@33 -- # stats='{ 00:18:38.943 "poll_groups": [ 00:18:38.943 { 00:18:38.943 "admin_qpairs": 0, 00:18:38.943 "completed_nvme_io": 0, 00:18:38.943 "current_admin_qpairs": 0, 00:18:38.943 "current_io_qpairs": 0, 00:18:38.943 "io_qpairs": 0, 00:18:38.943 "name": "nvmf_tgt_poll_group_0", 00:18:38.943 "pending_bdev_io": 0, 00:18:38.943 "transports": [ 00:18:38.943 { 00:18:38.943 "trtype": "TCP" 00:18:38.943 } 00:18:38.943 ] 00:18:38.943 }, 00:18:38.943 { 00:18:38.943 "admin_qpairs": 0, 00:18:38.943 "completed_nvme_io": 0, 00:18:38.943 "current_admin_qpairs": 0, 00:18:38.943 "current_io_qpairs": 0, 00:18:38.943 "io_qpairs": 0, 00:18:38.943 "name": "nvmf_tgt_poll_group_1", 00:18:38.943 "pending_bdev_io": 0, 00:18:38.943 "transports": [ 00:18:38.943 { 00:18:38.943 "trtype": "TCP" 00:18:38.943 } 00:18:38.943 ] 00:18:38.943 }, 00:18:38.943 { 00:18:38.943 "admin_qpairs": 0, 00:18:38.943 "completed_nvme_io": 0, 00:18:38.943 "current_admin_qpairs": 0, 00:18:38.943 "current_io_qpairs": 0, 00:18:38.943 "io_qpairs": 0, 00:18:38.943 "name": "nvmf_tgt_poll_group_2", 00:18:38.943 "pending_bdev_io": 0, 00:18:38.943 "transports": [ 00:18:38.943 { 00:18:38.943 "trtype": "TCP" 00:18:38.943 } 00:18:38.943 ] 00:18:38.943 }, 00:18:38.943 { 00:18:38.943 "admin_qpairs": 0, 00:18:38.943 "completed_nvme_io": 0, 00:18:38.943 "current_admin_qpairs": 0, 00:18:38.943 "current_io_qpairs": 0, 00:18:38.943 "io_qpairs": 0, 00:18:38.943 "name": "nvmf_tgt_poll_group_3", 00:18:38.943 "pending_bdev_io": 0, 00:18:38.943 "transports": [ 00:18:38.943 { 00:18:38.943 "trtype": "TCP" 00:18:38.943 } 00:18:38.943 ] 00:18:38.943 } 00:18:38.943 ], 00:18:38.943 "tick_rate": 2200000000 00:18:38.943 }' 00:18:38.943 11:09:46 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:38.943 11:09:46 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:38.943 11:09:46 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:38.943 11:09:46 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:38.943 11:09:46 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:38.943 11:09:46 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:38.943 11:09:46 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:38.943 11:09:46 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:38.943 11:09:46 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:38.943 11:09:47 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:38.943 11:09:47 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:38.943 11:09:47 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:38.943 11:09:47 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:38.943 11:09:47 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:38.943 11:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.943 11:09:47 -- common/autotest_common.sh@10 -- # set +x 00:18:38.943 Malloc1 00:18:38.943 11:09:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.944 11:09:47 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:38.944 11:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.944 11:09:47 -- common/autotest_common.sh@10 -- # set +x 00:18:38.944 11:09:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.944 11:09:47 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:38.944 11:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.944 11:09:47 -- common/autotest_common.sh@10 -- # set +x 00:18:38.944 11:09:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.944 11:09:47 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:38.944 11:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.944 11:09:47 -- common/autotest_common.sh@10 -- # set +x 00:18:39.203 11:09:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.203 11:09:47 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.203 11:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.203 11:09:47 -- common/autotest_common.sh@10 -- # set +x 00:18:39.203 [2024-04-18 11:09:47.170917] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.203 11:09:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.203 11:09:47 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 -a 10.0.0.2 -s 4420 00:18:39.203 11:09:47 -- common/autotest_common.sh@638 -- # local es=0 00:18:39.203 11:09:47 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 -a 10.0.0.2 -s 4420 00:18:39.203 11:09:47 -- common/autotest_common.sh@626 -- # local arg=nvme 00:18:39.203 11:09:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:39.203 11:09:47 -- common/autotest_common.sh@630 -- # type -t nvme 00:18:39.203 11:09:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" 
in 00:18:39.203 11:09:47 -- common/autotest_common.sh@632 -- # type -P nvme 00:18:39.203 11:09:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:39.203 11:09:47 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:18:39.203 11:09:47 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:18:39.203 11:09:47 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 -a 10.0.0.2 -s 4420 00:18:39.203 [2024-04-18 11:09:47.189631] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967' 00:18:39.203 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:39.203 could not add new controller: failed to write to nvme-fabrics device 00:18:39.203 11:09:47 -- common/autotest_common.sh@641 -- # es=1 00:18:39.203 11:09:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:39.203 11:09:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:39.203 11:09:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:39.203 11:09:47 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:39.203 11:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.203 11:09:47 -- common/autotest_common.sh@10 -- # set +x 00:18:39.203 11:09:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.203 11:09:47 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:39.203 11:09:47 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:39.203 11:09:47 -- common/autotest_common.sh@1184 -- # local i=0 00:18:39.203 11:09:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:39.203 11:09:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:39.203 11:09:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:41.731 11:09:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:41.731 11:09:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:41.731 11:09:49 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:41.731 11:09:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:41.731 11:09:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:41.731 11:09:49 -- common/autotest_common.sh@1194 -- # return 0 00:18:41.731 11:09:49 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:41.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:41.731 11:09:49 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:41.731 11:09:49 -- common/autotest_common.sh@1205 -- # local i=0 00:18:41.731 11:09:49 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:41.731 11:09:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:41.731 11:09:49 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:41.731 11:09:49 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:41.731 11:09:49 -- 
common/autotest_common.sh@1217 -- # return 0 00:18:41.731 11:09:49 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:41.731 11:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.731 11:09:49 -- common/autotest_common.sh@10 -- # set +x 00:18:41.731 11:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.731 11:09:49 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.731 11:09:49 -- common/autotest_common.sh@638 -- # local es=0 00:18:41.731 11:09:49 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.731 11:09:49 -- common/autotest_common.sh@626 -- # local arg=nvme 00:18:41.731 11:09:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:41.731 11:09:49 -- common/autotest_common.sh@630 -- # type -t nvme 00:18:41.731 11:09:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:41.731 11:09:49 -- common/autotest_common.sh@632 -- # type -P nvme 00:18:41.731 11:09:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:41.731 11:09:49 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:18:41.731 11:09:49 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:18:41.731 11:09:49 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.731 [2024-04-18 11:09:49.501806] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967' 00:18:41.731 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:41.731 could not add new controller: failed to write to nvme-fabrics device 00:18:41.731 11:09:49 -- common/autotest_common.sh@641 -- # es=1 00:18:41.731 11:09:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:41.731 11:09:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:41.731 11:09:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:41.731 11:09:49 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:41.731 11:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.731 11:09:49 -- common/autotest_common.sh@10 -- # set +x 00:18:41.731 11:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.731 11:09:49 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.731 11:09:49 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:41.731 11:09:49 -- common/autotest_common.sh@1184 -- # local i=0 00:18:41.731 11:09:49 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:41.731 11:09:49 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:41.731 11:09:49 -- common/autotest_common.sh@1191 -- # sleep 
2 00:18:43.631 11:09:51 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:43.631 11:09:51 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:43.631 11:09:51 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:43.631 11:09:51 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:43.631 11:09:51 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.631 11:09:51 -- common/autotest_common.sh@1194 -- # return 0 00:18:43.631 11:09:51 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.889 11:09:51 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:43.889 11:09:51 -- common/autotest_common.sh@1205 -- # local i=0 00:18:43.889 11:09:51 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:43.889 11:09:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.889 11:09:51 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:43.889 11:09:51 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.889 11:09:51 -- common/autotest_common.sh@1217 -- # return 0 00:18:43.889 11:09:51 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.889 11:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.889 11:09:51 -- common/autotest_common.sh@10 -- # set +x 00:18:43.889 11:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.889 11:09:51 -- target/rpc.sh@81 -- # seq 1 5 00:18:43.889 11:09:51 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:43.889 11:09:51 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:43.889 11:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.889 11:09:51 -- common/autotest_common.sh@10 -- # set +x 00:18:43.889 11:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.889 11:09:51 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.889 11:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.889 11:09:51 -- common/autotest_common.sh@10 -- # set +x 00:18:43.889 [2024-04-18 11:09:51.915758] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.889 11:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.889 11:09:51 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:43.889 11:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.889 11:09:51 -- common/autotest_common.sh@10 -- # set +x 00:18:43.889 11:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.889 11:09:51 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:43.889 11:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.889 11:09:51 -- common/autotest_common.sh@10 -- # set +x 00:18:43.889 11:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.889 11:09:51 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:44.147 11:09:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:44.147 11:09:52 -- 
common/autotest_common.sh@1184 -- # local i=0 00:18:44.147 11:09:52 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.147 11:09:52 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:44.147 11:09:52 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:46.043 11:09:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:46.043 11:09:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:46.043 11:09:54 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:46.043 11:09:54 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:46.043 11:09:54 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.043 11:09:54 -- common/autotest_common.sh@1194 -- # return 0 00:18:46.043 11:09:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:46.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.301 11:09:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:46.301 11:09:54 -- common/autotest_common.sh@1205 -- # local i=0 00:18:46.301 11:09:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:46.301 11:09:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.301 11:09:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:46.301 11:09:54 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.301 11:09:54 -- common/autotest_common.sh@1217 -- # return 0 00:18:46.301 11:09:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:46.301 11:09:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.301 11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.301 11:09:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.301 11:09:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.301 11:09:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.301 11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.301 11:09:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.301 11:09:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:46.301 11:09:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:46.301 11:09:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.301 11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.301 11:09:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.301 11:09:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.301 11:09:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.301 11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.301 [2024-04-18 11:09:54.339491] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.301 11:09:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.301 11:09:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:46.301 11:09:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.301 11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.301 11:09:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.301 11:09:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:46.301 11:09:54 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.301 11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:18:46.301 11:09:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.301 11:09:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:46.609 11:09:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:46.609 11:09:54 -- common/autotest_common.sh@1184 -- # local i=0 00:18:46.609 11:09:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:46.609 11:09:54 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:46.609 11:09:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:48.510 11:09:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:48.510 11:09:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:48.510 11:09:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:48.510 11:09:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:48.510 11:09:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:48.510 11:09:56 -- common/autotest_common.sh@1194 -- # return 0 00:18:48.510 11:09:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:48.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:48.510 11:09:56 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:48.510 11:09:56 -- common/autotest_common.sh@1205 -- # local i=0 00:18:48.510 11:09:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:48.510 11:09:56 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:48.510 11:09:56 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:48.510 11:09:56 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:48.510 11:09:56 -- common/autotest_common.sh@1217 -- # return 0 00:18:48.510 11:09:56 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:48.510 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.510 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:18:48.510 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.510 11:09:56 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.510 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.510 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:18:48.769 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.769 11:09:56 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:48.769 11:09:56 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:48.769 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.769 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:18:48.769 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.769 11:09:56 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.769 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.769 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:18:48.769 [2024-04-18 11:09:56.748529] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:18:48.769 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.769 11:09:56 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:48.769 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.769 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:18:48.769 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.769 11:09:56 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:48.769 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.769 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:18:48.769 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.769 11:09:56 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:48.769 11:09:56 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:48.769 11:09:56 -- common/autotest_common.sh@1184 -- # local i=0 00:18:48.769 11:09:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.769 11:09:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:48.769 11:09:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:50.728 11:09:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:50.728 11:09:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:50.728 11:09:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.988 11:09:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:50.988 11:09:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.988 11:09:58 -- common/autotest_common.sh@1194 -- # return 0 00:18:50.988 11:09:58 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.988 11:09:58 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.988 11:09:58 -- common/autotest_common.sh@1205 -- # local i=0 00:18:50.988 11:09:58 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:50.988 11:09:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.988 11:09:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:50.988 11:09:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.988 11:09:59 -- common/autotest_common.sh@1217 -- # return 0 00:18:50.988 11:09:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:50.988 11:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.988 11:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 11:09:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.988 11:09:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.988 11:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.988 11:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 11:09:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.988 11:09:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:50.988 11:09:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:50.988 11:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:18:50.988 11:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 11:09:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.988 11:09:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.988 11:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.988 11:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 [2024-04-18 11:09:59.043400] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.988 11:09:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.988 11:09:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:50.988 11:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.988 11:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 11:09:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.988 11:09:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:50.988 11:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.988 11:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 11:09:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.988 11:09:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:51.246 11:09:59 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:51.246 11:09:59 -- common/autotest_common.sh@1184 -- # local i=0 00:18:51.246 11:09:59 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.246 11:09:59 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:51.246 11:09:59 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:53.144 11:10:01 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:53.144 11:10:01 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:53.144 11:10:01 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.144 11:10:01 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:53.144 11:10:01 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.144 11:10:01 -- common/autotest_common.sh@1194 -- # return 0 00:18:53.144 11:10:01 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.144 11:10:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:53.144 11:10:01 -- common/autotest_common.sh@1205 -- # local i=0 00:18:53.144 11:10:01 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:53.144 11:10:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.144 11:10:01 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:53.144 11:10:01 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.144 11:10:01 -- common/autotest_common.sh@1217 -- # return 0 00:18:53.144 11:10:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:53.144 11:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.144 11:10:01 -- common/autotest_common.sh@10 -- # set +x 00:18:53.144 11:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.144 11:10:01 -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.144 11:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.144 11:10:01 -- common/autotest_common.sh@10 -- # set +x 00:18:53.144 11:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.144 11:10:01 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:53.144 11:10:01 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:53.144 11:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.144 11:10:01 -- common/autotest_common.sh@10 -- # set +x 00:18:53.144 11:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.144 11:10:01 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.144 11:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.144 11:10:01 -- common/autotest_common.sh@10 -- # set +x 00:18:53.144 [2024-04-18 11:10:01.329486] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.144 11:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.144 11:10:01 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:53.144 11:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.144 11:10:01 -- common/autotest_common.sh@10 -- # set +x 00:18:53.144 11:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.144 11:10:01 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:53.144 11:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.144 11:10:01 -- common/autotest_common.sh@10 -- # set +x 00:18:53.144 11:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.144 11:10:01 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:53.401 11:10:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:53.402 11:10:01 -- common/autotest_common.sh@1184 -- # local i=0 00:18:53.402 11:10:01 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:53.402 11:10:01 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:53.402 11:10:01 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:55.933 11:10:03 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:55.933 11:10:03 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:55.933 11:10:03 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:55.933 11:10:03 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:55.933 11:10:03 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:55.933 11:10:03 -- common/autotest_common.sh@1194 -- # return 0 00:18:55.933 11:10:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:55.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:55.933 11:10:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:55.933 11:10:03 -- common/autotest_common.sh@1205 -- # local i=0 00:18:55.933 11:10:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:55.933 11:10:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:55.933 11:10:03 -- common/autotest_common.sh@1213 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:18:55.933 11:10:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:55.933 11:10:03 -- common/autotest_common.sh@1217 -- # return 0 00:18:55.933 11:10:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@99 -- # seq 1 5 00:18:55.933 11:10:03 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:55.933 11:10:03 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 [2024-04-18 11:10:03.637625] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:55.933 11:10:03 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@101 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 [2024-04-18 11:10:03.685735] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:55.933 11:10:03 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 [2024-04-18 11:10:03.733744] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 
11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.933 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.933 11:10:03 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:55.933 11:10:03 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:55.933 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.933 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 [2024-04-18 11:10:03.781895] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:55.934 11:10:03 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 [2024-04-18 11:10:03.830012] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:55.934 11:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.934 11:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 11:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.934 11:10:03 -- target/rpc.sh@110 -- # stats='{ 00:18:55.934 "poll_groups": [ 00:18:55.934 { 00:18:55.934 "admin_qpairs": 2, 00:18:55.934 "completed_nvme_io": 65, 00:18:55.934 "current_admin_qpairs": 0, 00:18:55.934 "current_io_qpairs": 0, 00:18:55.934 "io_qpairs": 16, 00:18:55.934 "name": "nvmf_tgt_poll_group_0", 00:18:55.934 "pending_bdev_io": 0, 00:18:55.934 "transports": [ 00:18:55.934 { 00:18:55.934 "trtype": "TCP" 00:18:55.934 } 00:18:55.934 ] 00:18:55.934 }, 00:18:55.934 { 00:18:55.934 "admin_qpairs": 3, 00:18:55.934 "completed_nvme_io": 117, 00:18:55.934 "current_admin_qpairs": 0, 00:18:55.934 "current_io_qpairs": 0, 00:18:55.934 "io_qpairs": 17, 00:18:55.934 "name": "nvmf_tgt_poll_group_1", 00:18:55.934 "pending_bdev_io": 0, 00:18:55.934 "transports": [ 00:18:55.934 { 00:18:55.934 "trtype": "TCP" 00:18:55.934 } 00:18:55.934 ] 00:18:55.934 }, 00:18:55.934 { 00:18:55.934 "admin_qpairs": 1, 00:18:55.934 "completed_nvme_io": 168, 00:18:55.934 "current_admin_qpairs": 0, 00:18:55.934 "current_io_qpairs": 0, 00:18:55.934 "io_qpairs": 19, 00:18:55.934 "name": "nvmf_tgt_poll_group_2", 00:18:55.934 "pending_bdev_io": 0, 00:18:55.934 "transports": [ 00:18:55.934 { 00:18:55.934 "trtype": "TCP" 00:18:55.934 } 00:18:55.934 ] 00:18:55.934 }, 00:18:55.934 { 00:18:55.934 "admin_qpairs": 1, 00:18:55.934 "completed_nvme_io": 70, 00:18:55.934 "current_admin_qpairs": 0, 00:18:55.934 "current_io_qpairs": 0, 00:18:55.934 "io_qpairs": 18, 00:18:55.934 "name": "nvmf_tgt_poll_group_3", 00:18:55.934 "pending_bdev_io": 0, 00:18:55.934 "transports": [ 00:18:55.934 { 00:18:55.934 "trtype": "TCP" 00:18:55.934 } 00:18:55.934 ] 00:18:55.934 } 00:18:55.934 ], 00:18:55.934 "tick_rate": 2200000000 00:18:55.934 }' 00:18:55.934 11:10:03 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:55.934 11:10:03 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:55.934 11:10:03 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:55.934 11:10:03 -- target/rpc.sh@20 -- # awk 
'{s+=$1}END{print s}' 00:18:55.934 11:10:03 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:55.934 11:10:03 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:55.934 11:10:03 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:55.934 11:10:03 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:55.934 11:10:03 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:55.934 11:10:03 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:18:55.934 11:10:03 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:55.934 11:10:03 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:55.934 11:10:03 -- target/rpc.sh@123 -- # nvmftestfini 00:18:55.934 11:10:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:55.934 11:10:03 -- nvmf/common.sh@117 -- # sync 00:18:55.934 11:10:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:55.934 11:10:04 -- nvmf/common.sh@120 -- # set +e 00:18:55.934 11:10:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.934 11:10:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:55.934 rmmod nvme_tcp 00:18:55.934 rmmod nvme_fabrics 00:18:55.934 rmmod nvme_keyring 00:18:55.934 11:10:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.934 11:10:04 -- nvmf/common.sh@124 -- # set -e 00:18:55.934 11:10:04 -- nvmf/common.sh@125 -- # return 0 00:18:55.934 11:10:04 -- nvmf/common.sh@478 -- # '[' -n 68600 ']' 00:18:55.934 11:10:04 -- nvmf/common.sh@479 -- # killprocess 68600 00:18:55.934 11:10:04 -- common/autotest_common.sh@936 -- # '[' -z 68600 ']' 00:18:55.934 11:10:04 -- common/autotest_common.sh@940 -- # kill -0 68600 00:18:55.934 11:10:04 -- common/autotest_common.sh@941 -- # uname 00:18:55.934 11:10:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:55.934 11:10:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68600 00:18:55.934 11:10:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:55.934 11:10:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:55.934 killing process with pid 68600 00:18:55.934 11:10:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68600' 00:18:55.934 11:10:04 -- common/autotest_common.sh@955 -- # kill 68600 00:18:55.934 11:10:04 -- common/autotest_common.sh@960 -- # wait 68600 00:18:57.309 11:10:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:57.309 11:10:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:57.309 11:10:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:57.309 11:10:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.309 11:10:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.309 11:10:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.309 11:10:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.309 11:10:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.309 11:10:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:57.309 00:18:57.309 real 0m20.375s 00:18:57.309 user 1m14.881s 00:18:57.309 sys 0m2.302s 00:18:57.309 11:10:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:57.309 11:10:05 -- common/autotest_common.sh@10 -- # set +x 00:18:57.309 ************************************ 00:18:57.309 END TEST nvmf_rpc 00:18:57.309 ************************************ 00:18:57.567 11:10:05 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:57.567 11:10:05 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:57.567 11:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:57.567 11:10:05 -- common/autotest_common.sh@10 -- # set +x 00:18:57.567 ************************************ 00:18:57.567 START TEST nvmf_invalid 00:18:57.567 ************************************ 00:18:57.567 11:10:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:57.567 * Looking for test storage... 00:18:57.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:57.567 11:10:05 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:57.567 11:10:05 -- nvmf/common.sh@7 -- # uname -s 00:18:57.567 11:10:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.567 11:10:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.567 11:10:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.568 11:10:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.568 11:10:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.568 11:10:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.568 11:10:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.568 11:10:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.568 11:10:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.568 11:10:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.568 11:10:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:57.568 11:10:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:18:57.568 11:10:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.568 11:10:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.568 11:10:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:57.568 11:10:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.568 11:10:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:57.568 11:10:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.568 11:10:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.568 11:10:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.568 11:10:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.568 11:10:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.568 11:10:05 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.568 11:10:05 -- paths/export.sh@5 -- # export PATH 00:18:57.568 11:10:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.568 11:10:05 -- nvmf/common.sh@47 -- # : 0 00:18:57.568 11:10:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.568 11:10:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.568 11:10:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.568 11:10:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.568 11:10:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.568 11:10:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.568 11:10:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.568 11:10:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.568 11:10:05 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:18:57.568 11:10:05 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:57.568 11:10:05 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:57.568 11:10:05 -- target/invalid.sh@14 -- # target=foobar 00:18:57.568 11:10:05 -- target/invalid.sh@16 -- # RANDOM=0 00:18:57.568 11:10:05 -- target/invalid.sh@34 -- # nvmftestinit 00:18:57.568 11:10:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:57.568 11:10:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.568 11:10:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:57.568 11:10:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:57.568 11:10:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:57.568 11:10:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.568 11:10:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.568 11:10:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.568 11:10:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:57.568 11:10:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:57.568 11:10:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:57.568 11:10:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:57.568 11:10:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:57.568 11:10:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:57.568 11:10:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.568 11:10:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.568 11:10:05 -- nvmf/common.sh@143 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:57.568 11:10:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:57.568 11:10:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:57.568 11:10:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:57.568 11:10:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:57.568 11:10:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.568 11:10:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:57.568 11:10:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:57.568 11:10:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:57.568 11:10:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:57.568 11:10:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:57.568 11:10:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:57.568 Cannot find device "nvmf_tgt_br" 00:18:57.568 11:10:05 -- nvmf/common.sh@155 -- # true 00:18:57.568 11:10:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:57.568 Cannot find device "nvmf_tgt_br2" 00:18:57.568 11:10:05 -- nvmf/common.sh@156 -- # true 00:18:57.568 11:10:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:57.568 11:10:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:57.568 Cannot find device "nvmf_tgt_br" 00:18:57.568 11:10:05 -- nvmf/common.sh@158 -- # true 00:18:57.568 11:10:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:57.568 Cannot find device "nvmf_tgt_br2" 00:18:57.568 11:10:05 -- nvmf/common.sh@159 -- # true 00:18:57.568 11:10:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:57.826 11:10:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:57.827 11:10:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:57.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.827 11:10:05 -- nvmf/common.sh@162 -- # true 00:18:57.827 11:10:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:57.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.827 11:10:05 -- nvmf/common.sh@163 -- # true 00:18:57.827 11:10:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:57.827 11:10:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:57.827 11:10:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:57.827 11:10:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:57.827 11:10:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:57.827 11:10:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:57.827 11:10:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:57.827 11:10:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:57.827 11:10:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:57.827 11:10:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:57.827 11:10:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:57.827 11:10:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:57.827 11:10:05 -- nvmf/common.sh@186 -- # ip link 
set nvmf_tgt_br2 up 00:18:57.827 11:10:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:57.827 11:10:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:57.827 11:10:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:57.827 11:10:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:57.827 11:10:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:57.827 11:10:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:57.827 11:10:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:57.827 11:10:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:57.827 11:10:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:57.827 11:10:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:57.827 11:10:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:57.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:18:57.827 00:18:57.827 --- 10.0.0.2 ping statistics --- 00:18:57.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.827 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:57.827 11:10:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:57.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:57.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:57.827 00:18:57.827 --- 10.0.0.3 ping statistics --- 00:18:57.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.827 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:57.827 11:10:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:57.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:57.827 00:18:57.827 --- 10.0.0.1 ping statistics --- 00:18:57.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.827 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:57.827 11:10:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.827 11:10:06 -- nvmf/common.sh@422 -- # return 0 00:18:57.827 11:10:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:57.827 11:10:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.827 11:10:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:57.827 11:10:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:57.827 11:10:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.827 11:10:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:57.827 11:10:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:58.085 11:10:06 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:58.085 11:10:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:58.085 11:10:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:58.085 11:10:06 -- common/autotest_common.sh@10 -- # set +x 00:18:58.085 11:10:06 -- nvmf/common.sh@470 -- # nvmfpid=69133 00:18:58.085 11:10:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:58.085 11:10:06 -- nvmf/common.sh@471 -- # waitforlisten 69133 00:18:58.085 11:10:06 -- common/autotest_common.sh@817 -- # '[' -z 69133 ']' 00:18:58.085 11:10:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.086 11:10:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:58.086 11:10:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.086 11:10:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:58.086 11:10:06 -- common/autotest_common.sh@10 -- # set +x 00:18:58.086 [2024-04-18 11:10:06.180947] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:58.086 [2024-04-18 11:10:06.181153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.344 [2024-04-18 11:10:06.359079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.602 [2024-04-18 11:10:06.620199] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.602 [2024-04-18 11:10:06.620261] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.602 [2024-04-18 11:10:06.620283] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.602 [2024-04-18 11:10:06.620296] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.602 [2024-04-18 11:10:06.620310] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.603 [2024-04-18 11:10:06.620904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.603 [2024-04-18 11:10:06.621141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.603 [2024-04-18 11:10:06.621640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.603 [2024-04-18 11:10:06.621642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.169 11:10:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:59.169 11:10:07 -- common/autotest_common.sh@850 -- # return 0 00:18:59.169 11:10:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:59.170 11:10:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:59.170 11:10:07 -- common/autotest_common.sh@10 -- # set +x 00:18:59.170 11:10:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.170 11:10:07 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:59.170 11:10:07 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24033 00:18:59.428 [2024-04-18 11:10:07.436371] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:59.428 11:10:07 -- target/invalid.sh@40 -- # out='2024/04/18 11:10:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24033 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:18:59.428 request: 00:18:59.428 { 00:18:59.428 "method": "nvmf_create_subsystem", 00:18:59.428 "params": { 00:18:59.428 "nqn": "nqn.2016-06.io.spdk:cnode24033", 00:18:59.428 "tgt_name": "foobar" 00:18:59.428 } 00:18:59.428 } 00:18:59.428 Got JSON-RPC error response 00:18:59.428 GoRPCClient: error on JSON-RPC call' 00:18:59.428 11:10:07 -- target/invalid.sh@41 -- # [[ 2024/04/18 11:10:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24033 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:18:59.428 request: 00:18:59.428 { 00:18:59.428 "method": "nvmf_create_subsystem", 00:18:59.428 "params": { 00:18:59.428 "nqn": "nqn.2016-06.io.spdk:cnode24033", 00:18:59.428 "tgt_name": "foobar" 00:18:59.428 } 00:18:59.428 } 00:18:59.428 Got JSON-RPC error response 00:18:59.428 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:59.428 11:10:07 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:59.428 11:10:07 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21834 00:18:59.686 [2024-04-18 11:10:07.672773] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21834: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:59.686 11:10:07 -- target/invalid.sh@45 -- # out='2024/04/18 11:10:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21834 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:18:59.686 request: 00:18:59.686 { 00:18:59.686 "method": "nvmf_create_subsystem", 00:18:59.686 "params": { 00:18:59.686 "nqn": "nqn.2016-06.io.spdk:cnode21834", 00:18:59.686 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:18:59.686 } 00:18:59.686 } 00:18:59.686 Got JSON-RPC error response 00:18:59.686 GoRPCClient: error on JSON-RPC call' 00:18:59.686 11:10:07 -- target/invalid.sh@46 -- # [[ 2024/04/18 11:10:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21834 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:18:59.686 request: 00:18:59.686 { 00:18:59.686 "method": "nvmf_create_subsystem", 00:18:59.686 "params": { 00:18:59.686 "nqn": "nqn.2016-06.io.spdk:cnode21834", 00:18:59.686 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:18:59.686 } 00:18:59.686 } 00:18:59.686 Got JSON-RPC error response 00:18:59.686 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:59.686 11:10:07 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:59.686 11:10:07 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7335 00:18:59.944 [2024-04-18 11:10:07.921211] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7335: invalid model number 'SPDK_Controller' 00:18:59.944 11:10:07 -- target/invalid.sh@50 -- # out='2024/04/18 11:10:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode7335], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:18:59.944 request: 00:18:59.944 { 00:18:59.944 "method": "nvmf_create_subsystem", 00:18:59.944 "params": { 00:18:59.944 "nqn": "nqn.2016-06.io.spdk:cnode7335", 00:18:59.944 "model_number": "SPDK_Controller\u001f" 00:18:59.944 } 00:18:59.944 } 00:18:59.944 Got JSON-RPC error response 00:18:59.944 GoRPCClient: error on JSON-RPC call' 00:18:59.944 11:10:07 -- target/invalid.sh@51 -- # [[ 2024/04/18 11:10:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode7335], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:18:59.944 request: 00:18:59.944 { 00:18:59.944 "method": "nvmf_create_subsystem", 00:18:59.944 "params": { 00:18:59.944 "nqn": "nqn.2016-06.io.spdk:cnode7335", 00:18:59.944 "model_number": "SPDK_Controller\u001f" 00:18:59.944 } 00:18:59.944 } 00:18:59.944 Got JSON-RPC error response 00:18:59.944 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:59.944 11:10:07 -- target/invalid.sh@54 -- # gen_random_s 21 00:18:59.944 11:10:07 -- target/invalid.sh@19 -- # local length=21 ll 00:18:59.944 11:10:07 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:59.944 11:10:07 -- target/invalid.sh@21 -- # local chars 00:18:59.944 11:10:07 -- target/invalid.sh@22 -- # local string 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll < length 
)) 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # printf %x 116 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # string+=t 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # printf %x 77 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # string+=M 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # printf %x 104 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # string+=h 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # printf %x 113 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # echo -e '\x71' 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # string+=q 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # printf %x 65 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # string+=A 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # printf %x 38 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # echo -e '\x26' 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # string+='&' 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # printf %x 105 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # string+=i 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # printf %x 104 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # string+=h 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # printf %x 49 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # string+=1 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # printf %x 99 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:59.944 11:10:07 -- target/invalid.sh@25 -- # string+=c 00:18:59.944 11:10:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 78 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+=N 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 
00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 93 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+=']' 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 69 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+=E 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 118 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+=v 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 106 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+=j 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 104 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+=h 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 110 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+=n 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 53 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+=5 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 36 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+='$' 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 107 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+=k 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # printf %x 44 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:59.944 11:10:08 -- target/invalid.sh@25 -- # string+=, 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:59.944 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:59.944 11:10:08 -- target/invalid.sh@28 -- # [[ t == \- ]] 00:18:59.944 11:10:08 -- target/invalid.sh@31 -- # echo 'tMhqA&ih1cN]Evjhn5$k,' 00:18:59.945 11:10:08 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'tMhqA&ih1cN]Evjhn5$k,' nqn.2016-06.io.spdk:cnode11778 
00:19:00.206 [2024-04-18 11:10:08.301809] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11778: invalid serial number 'tMhqA&ih1cN]Evjhn5$k,' 00:19:00.206 11:10:08 -- target/invalid.sh@54 -- # out='2024/04/18 11:10:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11778 serial_number:tMhqA&ih1cN]Evjhn5$k,], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN tMhqA&ih1cN]Evjhn5$k, 00:19:00.206 request: 00:19:00.206 { 00:19:00.206 "method": "nvmf_create_subsystem", 00:19:00.206 "params": { 00:19:00.206 "nqn": "nqn.2016-06.io.spdk:cnode11778", 00:19:00.207 "serial_number": "tMhqA&ih1cN]Evjhn5$k," 00:19:00.207 } 00:19:00.207 } 00:19:00.207 Got JSON-RPC error response 00:19:00.207 GoRPCClient: error on JSON-RPC call' 00:19:00.207 11:10:08 -- target/invalid.sh@55 -- # [[ 2024/04/18 11:10:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11778 serial_number:tMhqA&ih1cN]Evjhn5$k,], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN tMhqA&ih1cN]Evjhn5$k, 00:19:00.207 request: 00:19:00.207 { 00:19:00.207 "method": "nvmf_create_subsystem", 00:19:00.207 "params": { 00:19:00.207 "nqn": "nqn.2016-06.io.spdk:cnode11778", 00:19:00.207 "serial_number": "tMhqA&ih1cN]Evjhn5$k," 00:19:00.207 } 00:19:00.207 } 00:19:00.207 Got JSON-RPC error response 00:19:00.207 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:00.207 11:10:08 -- target/invalid.sh@58 -- # gen_random_s 41 00:19:00.207 11:10:08 -- target/invalid.sh@19 -- # local length=41 ll 00:19:00.207 11:10:08 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:00.207 11:10:08 -- target/invalid.sh@21 -- # local chars 00:19:00.207 11:10:08 -- target/invalid.sh@22 -- # local string 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 111 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=o 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 37 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x25' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=% 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 43 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=+ 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 44 00:19:00.207 11:10:08 -- 
target/invalid.sh@25 -- # echo -e '\x2c' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=, 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 127 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=$'\177' 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 119 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x77' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=w 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 96 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x60' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+='`' 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 78 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=N 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 71 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x47' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=G 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 40 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x28' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+='(' 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 77 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=M 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 41 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x29' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=')' 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 81 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x51' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=Q 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 93 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=']' 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 86 00:19:00.207 11:10:08 
-- target/invalid.sh@25 -- # echo -e '\x56' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=V 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 80 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=P 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 36 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x24' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+='$' 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 124 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+='|' 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 99 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x63' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=c 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 58 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=: 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 103 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=g 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # printf %x 115 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x73' 00:19:00.207 11:10:08 -- target/invalid.sh@25 -- # string+=s 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.207 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 36 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x24' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+='$' 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 47 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=/ 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 87 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x57' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=W 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 66 00:19:00.467 11:10:08 -- 
target/invalid.sh@25 -- # echo -e '\x42' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=B 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 35 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x23' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+='#' 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 57 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x39' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=9 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 98 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x62' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=b 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 72 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=H 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 70 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x46' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=F 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 109 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=m 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 46 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=. 
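The wall of xtrace above and below is invalid.sh's gen_random_s helper assembling a 41-character string one character at a time: printf %x turns a decimal codepoint into hex, echo -e turns the hex escape back into a character, and string+= appends it. A condensed sketch of the same technique, with simplified names and without the leading-dash check the real script performs afterwards:

    # Illustrative sketch of the gen_random_s technique traced here; not the script itself.
    gen_random_string() {
        local length=$1 out='' code hex i
        for (( i = 0; i < length; i++ )); do
            code=$(( RANDOM % 95 + 32 ))        # printable ASCII 32..126
            hex=$(printf '%x' "$code")
            out+=$(echo -e "\\x$hex")           # decode the hex escape into a character
        done
        printf '%s\n' "$out"
    }
    gen_random_string 41    # e.g. a candidate model number for nvmf_create_subsystem -d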
00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 87 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x57' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=W 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 113 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x71' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=q 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 114 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x72' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=r 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 83 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=S 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.467 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # printf %x 66 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x42' 00:19:00.467 11:10:08 -- target/invalid.sh@25 -- # string+=B 00:19:00.468 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.468 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.468 11:10:08 -- target/invalid.sh@25 -- # printf %x 87 00:19:00.468 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x57' 00:19:00.468 11:10:08 -- target/invalid.sh@25 -- # string+=W 00:19:00.468 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.468 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.468 11:10:08 -- target/invalid.sh@25 -- # printf %x 68 00:19:00.468 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x44' 00:19:00.468 11:10:08 -- target/invalid.sh@25 -- # string+=D 00:19:00.468 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.468 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.468 11:10:08 -- target/invalid.sh@25 -- # printf %x 42 00:19:00.468 11:10:08 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:19:00.468 11:10:08 -- target/invalid.sh@25 -- # string+='*' 00:19:00.468 11:10:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:00.468 11:10:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:00.468 11:10:08 -- target/invalid.sh@28 -- # [[ o == \- ]] 00:19:00.468 11:10:08 -- target/invalid.sh@31 -- # echo 'o%+,w`NG(M)Q]VP$|c:gs$/WB#9bHFm.WqrSBWD*' 00:19:00.468 11:10:08 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'o%+,w`NG(M)Q]VP$|c:gs$/WB#9bHFm.WqrSBWD*' nqn.2016-06.io.spdk:cnode18721 00:19:00.725 [2024-04-18 11:10:08.782481] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18721: invalid model number 'o%+,w`NG(M)Q]VP$|c:gs$/WB#9bHFm.WqrSBWD*' 00:19:00.725 11:10:08 -- target/invalid.sh@58 -- # out='2024/04/18 11:10:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:o%+,w`NG(M)Q]VP$|c:gs$/WB#9bHFm.WqrSBWD* nqn:nqn.2016-06.io.spdk:cnode18721], err: error received for nvmf_create_subsystem 
method, err: Code=-32602 Msg=Invalid MN o%+,w`NG(M)Q]VP$|c:gs$/WB#9bHFm.WqrSBWD* 00:19:00.725 request: 00:19:00.725 { 00:19:00.725 "method": "nvmf_create_subsystem", 00:19:00.725 "params": { 00:19:00.726 "nqn": "nqn.2016-06.io.spdk:cnode18721", 00:19:00.726 "model_number": "o%+,\u007fw`NG(M)Q]VP$|c:gs$/WB#9bHFm.WqrSBWD*" 00:19:00.726 } 00:19:00.726 } 00:19:00.726 Got JSON-RPC error response 00:19:00.726 GoRPCClient: error on JSON-RPC call' 00:19:00.726 11:10:08 -- target/invalid.sh@59 -- # [[ 2024/04/18 11:10:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:o%+,w`NG(M)Q]VP$|c:gs$/WB#9bHFm.WqrSBWD* nqn:nqn.2016-06.io.spdk:cnode18721], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN o%+,w`NG(M)Q]VP$|c:gs$/WB#9bHFm.WqrSBWD* 00:19:00.726 request: 00:19:00.726 { 00:19:00.726 "method": "nvmf_create_subsystem", 00:19:00.726 "params": { 00:19:00.726 "nqn": "nqn.2016-06.io.spdk:cnode18721", 00:19:00.726 "model_number": "o%+,\u007fw`NG(M)Q]VP$|c:gs$/WB#9bHFm.WqrSBWD*" 00:19:00.726 } 00:19:00.726 } 00:19:00.726 Got JSON-RPC error response 00:19:00.726 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:00.726 11:10:08 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:19:00.983 [2024-04-18 11:10:09.022962] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.984 11:10:09 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:19:01.241 11:10:09 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:19:01.241 11:10:09 -- target/invalid.sh@67 -- # head -n 1 00:19:01.241 11:10:09 -- target/invalid.sh@67 -- # echo '' 00:19:01.241 11:10:09 -- target/invalid.sh@67 -- # IP= 00:19:01.241 11:10:09 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:19:01.500 [2024-04-18 11:10:09.601615] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:19:01.500 11:10:09 -- target/invalid.sh@69 -- # out='2024/04/18 11:10:09 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:19:01.500 request: 00:19:01.500 { 00:19:01.500 "method": "nvmf_subsystem_remove_listener", 00:19:01.500 "params": { 00:19:01.500 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:01.500 "listen_address": { 00:19:01.500 "trtype": "tcp", 00:19:01.500 "traddr": "", 00:19:01.500 "trsvcid": "4421" 00:19:01.500 } 00:19:01.500 } 00:19:01.500 } 00:19:01.500 Got JSON-RPC error response 00:19:01.500 GoRPCClient: error on JSON-RPC call' 00:19:01.500 11:10:09 -- target/invalid.sh@70 -- # [[ 2024/04/18 11:10:09 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:19:01.500 request: 00:19:01.500 { 00:19:01.500 "method": "nvmf_subsystem_remove_listener", 00:19:01.500 "params": { 00:19:01.500 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:01.500 "listen_address": { 00:19:01.500 "trtype": "tcp", 00:19:01.500 "traddr": "", 00:19:01.500 "trsvcid": "4421" 00:19:01.500 } 00:19:01.500 } 
00:19:01.500 } 00:19:01.500 Got JSON-RPC error response 00:19:01.500 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:19:01.500 11:10:09 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17798 -i 0 00:19:01.759 [2024-04-18 11:10:09.833841] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17798: invalid cntlid range [0-65519] 00:19:01.759 11:10:09 -- target/invalid.sh@73 -- # out='2024/04/18 11:10:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode17798], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:19:01.759 request: 00:19:01.759 { 00:19:01.759 "method": "nvmf_create_subsystem", 00:19:01.759 "params": { 00:19:01.759 "nqn": "nqn.2016-06.io.spdk:cnode17798", 00:19:01.759 "min_cntlid": 0 00:19:01.759 } 00:19:01.759 } 00:19:01.759 Got JSON-RPC error response 00:19:01.759 GoRPCClient: error on JSON-RPC call' 00:19:01.759 11:10:09 -- target/invalid.sh@74 -- # [[ 2024/04/18 11:10:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode17798], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:19:01.759 request: 00:19:01.759 { 00:19:01.759 "method": "nvmf_create_subsystem", 00:19:01.759 "params": { 00:19:01.759 "nqn": "nqn.2016-06.io.spdk:cnode17798", 00:19:01.759 "min_cntlid": 0 00:19:01.759 } 00:19:01.759 } 00:19:01.759 Got JSON-RPC error response 00:19:01.759 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:01.759 11:10:09 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13768 -i 65520 00:19:02.018 [2024-04-18 11:10:10.074173] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13768: invalid cntlid range [65520-65519] 00:19:02.018 11:10:10 -- target/invalid.sh@75 -- # out='2024/04/18 11:10:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13768], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:19:02.018 request: 00:19:02.018 { 00:19:02.018 "method": "nvmf_create_subsystem", 00:19:02.018 "params": { 00:19:02.018 "nqn": "nqn.2016-06.io.spdk:cnode13768", 00:19:02.018 "min_cntlid": 65520 00:19:02.018 } 00:19:02.018 } 00:19:02.018 Got JSON-RPC error response 00:19:02.018 GoRPCClient: error on JSON-RPC call' 00:19:02.018 11:10:10 -- target/invalid.sh@76 -- # [[ 2024/04/18 11:10:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13768], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:19:02.018 request: 00:19:02.018 { 00:19:02.018 "method": "nvmf_create_subsystem", 00:19:02.018 "params": { 00:19:02.018 "nqn": "nqn.2016-06.io.spdk:cnode13768", 00:19:02.018 "min_cntlid": 65520 00:19:02.018 } 00:19:02.018 } 00:19:02.018 Got JSON-RPC error response 00:19:02.018 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:02.018 11:10:10 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31062 -I 0 00:19:02.277 
[2024-04-18 11:10:10.354598] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31062: invalid cntlid range [1-0] 00:19:02.277 11:10:10 -- target/invalid.sh@77 -- # out='2024/04/18 11:10:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31062], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:19:02.277 request: 00:19:02.277 { 00:19:02.277 "method": "nvmf_create_subsystem", 00:19:02.277 "params": { 00:19:02.277 "nqn": "nqn.2016-06.io.spdk:cnode31062", 00:19:02.277 "max_cntlid": 0 00:19:02.277 } 00:19:02.277 } 00:19:02.277 Got JSON-RPC error response 00:19:02.277 GoRPCClient: error on JSON-RPC call' 00:19:02.277 11:10:10 -- target/invalid.sh@78 -- # [[ 2024/04/18 11:10:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31062], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:19:02.277 request: 00:19:02.277 { 00:19:02.277 "method": "nvmf_create_subsystem", 00:19:02.277 "params": { 00:19:02.277 "nqn": "nqn.2016-06.io.spdk:cnode31062", 00:19:02.277 "max_cntlid": 0 00:19:02.277 } 00:19:02.277 } 00:19:02.277 Got JSON-RPC error response 00:19:02.277 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:02.277 11:10:10 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2952 -I 65520 00:19:02.536 [2024-04-18 11:10:10.623066] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2952: invalid cntlid range [1-65520] 00:19:02.536 11:10:10 -- target/invalid.sh@79 -- # out='2024/04/18 11:10:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode2952], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:19:02.536 request: 00:19:02.536 { 00:19:02.536 "method": "nvmf_create_subsystem", 00:19:02.536 "params": { 00:19:02.536 "nqn": "nqn.2016-06.io.spdk:cnode2952", 00:19:02.536 "max_cntlid": 65520 00:19:02.536 } 00:19:02.536 } 00:19:02.536 Got JSON-RPC error response 00:19:02.536 GoRPCClient: error on JSON-RPC call' 00:19:02.536 11:10:10 -- target/invalid.sh@80 -- # [[ 2024/04/18 11:10:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode2952], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:19:02.536 request: 00:19:02.536 { 00:19:02.536 "method": "nvmf_create_subsystem", 00:19:02.536 "params": { 00:19:02.536 "nqn": "nqn.2016-06.io.spdk:cnode2952", 00:19:02.536 "max_cntlid": 65520 00:19:02.536 } 00:19:02.536 } 00:19:02.536 Got JSON-RPC error response 00:19:02.536 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:02.536 11:10:10 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19647 -i 6 -I 5 00:19:02.793 [2024-04-18 11:10:10.875474] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19647: invalid cntlid range [6-5] 00:19:02.793 11:10:10 -- target/invalid.sh@83 -- # out='2024/04/18 11:10:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode19647], err: 
error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:19:02.793 request: 00:19:02.793 { 00:19:02.793 "method": "nvmf_create_subsystem", 00:19:02.793 "params": { 00:19:02.793 "nqn": "nqn.2016-06.io.spdk:cnode19647", 00:19:02.793 "min_cntlid": 6, 00:19:02.793 "max_cntlid": 5 00:19:02.793 } 00:19:02.793 } 00:19:02.793 Got JSON-RPC error response 00:19:02.793 GoRPCClient: error on JSON-RPC call' 00:19:02.794 11:10:10 -- target/invalid.sh@84 -- # [[ 2024/04/18 11:10:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode19647], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:19:02.794 request: 00:19:02.794 { 00:19:02.794 "method": "nvmf_create_subsystem", 00:19:02.794 "params": { 00:19:02.794 "nqn": "nqn.2016-06.io.spdk:cnode19647", 00:19:02.794 "min_cntlid": 6, 00:19:02.794 "max_cntlid": 5 00:19:02.794 } 00:19:02.794 } 00:19:02.794 Got JSON-RPC error response 00:19:02.794 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:02.794 11:10:10 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:19:03.052 11:10:11 -- target/invalid.sh@87 -- # out='request: 00:19:03.052 { 00:19:03.052 "name": "foobar", 00:19:03.052 "method": "nvmf_delete_target", 00:19:03.052 "req_id": 1 00:19:03.052 } 00:19:03.052 Got JSON-RPC error response 00:19:03.052 response: 00:19:03.052 { 00:19:03.052 "code": -32602, 00:19:03.052 "message": "The specified target doesn'\''t exist, cannot delete it." 00:19:03.052 }' 00:19:03.052 11:10:11 -- target/invalid.sh@88 -- # [[ request: 00:19:03.052 { 00:19:03.052 "name": "foobar", 00:19:03.052 "method": "nvmf_delete_target", 00:19:03.052 "req_id": 1 00:19:03.052 } 00:19:03.052 Got JSON-RPC error response 00:19:03.052 response: 00:19:03.052 { 00:19:03.052 "code": -32602, 00:19:03.052 "message": "The specified target doesn't exist, cannot delete it." 
00:19:03.052 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:19:03.052 11:10:11 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:19:03.052 11:10:11 -- target/invalid.sh@91 -- # nvmftestfini 00:19:03.052 11:10:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:03.052 11:10:11 -- nvmf/common.sh@117 -- # sync 00:19:03.052 11:10:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.052 11:10:11 -- nvmf/common.sh@120 -- # set +e 00:19:03.052 11:10:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.053 11:10:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.053 rmmod nvme_tcp 00:19:03.053 rmmod nvme_fabrics 00:19:03.053 rmmod nvme_keyring 00:19:03.053 11:10:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.053 11:10:11 -- nvmf/common.sh@124 -- # set -e 00:19:03.053 11:10:11 -- nvmf/common.sh@125 -- # return 0 00:19:03.053 11:10:11 -- nvmf/common.sh@478 -- # '[' -n 69133 ']' 00:19:03.053 11:10:11 -- nvmf/common.sh@479 -- # killprocess 69133 00:19:03.053 11:10:11 -- common/autotest_common.sh@936 -- # '[' -z 69133 ']' 00:19:03.053 11:10:11 -- common/autotest_common.sh@940 -- # kill -0 69133 00:19:03.053 11:10:11 -- common/autotest_common.sh@941 -- # uname 00:19:03.053 11:10:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:03.053 11:10:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69133 00:19:03.053 11:10:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:03.053 killing process with pid 69133 00:19:03.053 11:10:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:03.053 11:10:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69133' 00:19:03.053 11:10:11 -- common/autotest_common.sh@955 -- # kill 69133 00:19:03.053 11:10:11 -- common/autotest_common.sh@960 -- # wait 69133 00:19:04.436 11:10:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:04.436 11:10:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:04.436 11:10:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:04.436 11:10:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.436 11:10:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.436 11:10:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.436 11:10:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.436 11:10:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.436 11:10:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:04.436 00:19:04.436 real 0m6.761s 00:19:04.436 user 0m25.079s 00:19:04.436 sys 0m1.454s 00:19:04.436 11:10:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:04.436 11:10:12 -- common/autotest_common.sh@10 -- # set +x 00:19:04.436 ************************************ 00:19:04.436 END TEST nvmf_invalid 00:19:04.436 ************************************ 00:19:04.436 11:10:12 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:04.436 11:10:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:04.436 11:10:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:04.436 11:10:12 -- common/autotest_common.sh@10 -- # set +x 00:19:04.436 ************************************ 00:19:04.436 START TEST nvmf_abort 00:19:04.436 ************************************ 00:19:04.436 11:10:12 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:04.436 * Looking for test storage... 00:19:04.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:04.436 11:10:12 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:04.436 11:10:12 -- nvmf/common.sh@7 -- # uname -s 00:19:04.436 11:10:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.436 11:10:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.436 11:10:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.436 11:10:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.436 11:10:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.436 11:10:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.436 11:10:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.436 11:10:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.436 11:10:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.436 11:10:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.436 11:10:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:19:04.436 11:10:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:19:04.436 11:10:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.436 11:10:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.436 11:10:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:04.436 11:10:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.436 11:10:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:04.436 11:10:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.436 11:10:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.436 11:10:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.436 11:10:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.436 11:10:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.436 11:10:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.436 11:10:12 -- paths/export.sh@5 -- # export PATH 00:19:04.436 11:10:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.436 11:10:12 -- nvmf/common.sh@47 -- # : 0 00:19:04.436 11:10:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.436 11:10:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.436 11:10:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.436 11:10:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.436 11:10:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.436 11:10:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.436 11:10:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.436 11:10:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.436 11:10:12 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.436 11:10:12 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:19:04.436 11:10:12 -- target/abort.sh@14 -- # nvmftestinit 00:19:04.436 11:10:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:04.436 11:10:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.436 11:10:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:04.436 11:10:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:04.436 11:10:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:04.436 11:10:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.436 11:10:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.436 11:10:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.436 11:10:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:04.436 11:10:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:04.437 11:10:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:04.437 11:10:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:04.437 11:10:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:04.437 11:10:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:04.437 11:10:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.437 11:10:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.437 11:10:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:04.437 11:10:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:04.437 11:10:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:04.437 11:10:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:04.437 11:10:12 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:04.437 11:10:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.437 11:10:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:04.437 11:10:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:04.437 11:10:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:04.437 11:10:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:04.437 11:10:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:04.437 11:10:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:04.437 Cannot find device "nvmf_tgt_br" 00:19:04.437 11:10:12 -- nvmf/common.sh@155 -- # true 00:19:04.437 11:10:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:04.437 Cannot find device "nvmf_tgt_br2" 00:19:04.437 11:10:12 -- nvmf/common.sh@156 -- # true 00:19:04.437 11:10:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:04.437 11:10:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:04.437 Cannot find device "nvmf_tgt_br" 00:19:04.437 11:10:12 -- nvmf/common.sh@158 -- # true 00:19:04.437 11:10:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:04.695 Cannot find device "nvmf_tgt_br2" 00:19:04.695 11:10:12 -- nvmf/common.sh@159 -- # true 00:19:04.695 11:10:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:04.695 11:10:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:04.695 11:10:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:04.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:04.695 11:10:12 -- nvmf/common.sh@162 -- # true 00:19:04.695 11:10:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:04.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:04.695 11:10:12 -- nvmf/common.sh@163 -- # true 00:19:04.695 11:10:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:04.696 11:10:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:04.696 11:10:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:04.696 11:10:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:04.696 11:10:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:04.696 11:10:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:04.696 11:10:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:04.696 11:10:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:04.696 11:10:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:04.696 11:10:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:04.696 11:10:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:04.696 11:10:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:04.696 11:10:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:04.696 11:10:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:04.696 11:10:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:04.696 11:10:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:19:04.696 11:10:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:04.696 11:10:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:04.696 11:10:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:04.696 11:10:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:04.696 11:10:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:04.696 11:10:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:04.696 11:10:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:04.696 11:10:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:04.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:19:04.696 00:19:04.696 --- 10.0.0.2 ping statistics --- 00:19:04.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.696 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:04.696 11:10:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:04.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:04.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:04.696 00:19:04.696 --- 10.0.0.3 ping statistics --- 00:19:04.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.696 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:04.696 11:10:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:04.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:04.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:19:04.696 00:19:04.696 --- 10.0.0.1 ping statistics --- 00:19:04.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.696 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:04.696 11:10:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.696 11:10:12 -- nvmf/common.sh@422 -- # return 0 00:19:04.696 11:10:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:04.696 11:10:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.696 11:10:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:04.696 11:10:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:04.696 11:10:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.696 11:10:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:04.696 11:10:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:04.954 11:10:12 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:19:04.954 11:10:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:04.954 11:10:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:04.954 11:10:12 -- common/autotest_common.sh@10 -- # set +x 00:19:04.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
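Stripped of the xtrace prefixes, the nvmf_veth_init sequence traced just above reduces to the following (condensed: the second target interface and the bridge FORWARD rule are omitted; run as root on a disposable host such as this CI VM):

    # Condensed from the trace: initiator on the host, target isolated in a netns,
    # both veth peers bridged so 10.0.0.1 <-> 10.0.0.2 traffic flows over nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # the connectivity check whose output appears above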
00:19:04.954 11:10:12 -- nvmf/common.sh@470 -- # nvmfpid=69663 00:19:04.954 11:10:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:04.954 11:10:12 -- nvmf/common.sh@471 -- # waitforlisten 69663 00:19:04.954 11:10:12 -- common/autotest_common.sh@817 -- # '[' -z 69663 ']' 00:19:04.954 11:10:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.954 11:10:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:04.954 11:10:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.954 11:10:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:04.954 11:10:12 -- common/autotest_common.sh@10 -- # set +x 00:19:04.954 [2024-04-18 11:10:13.037508] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:04.954 [2024-04-18 11:10:13.037929] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.212 [2024-04-18 11:10:13.209343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:05.469 [2024-04-18 11:10:13.455619] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.469 [2024-04-18 11:10:13.455957] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.469 [2024-04-18 11:10:13.456393] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.469 [2024-04-18 11:10:13.456846] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.469 [2024-04-18 11:10:13.457086] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
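With the target process up, the rpc_cmd calls traced below assemble the abort-test subsystem; rpc_cmd is the test harness's wrapper around scripts/rpc.py, so written as direct rpc.py invocations the sequence is roughly:

    # The abort.sh setup, condensed from the trace that follows (rpc_cmd -> rpc.py).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0                      # 64 MiB backing bdev
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000                # large artificial latency so aborts catch I/Os in flight
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # and the workload that produces the success/unsuccess/failed summary further down:
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128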
00:19:05.469 [2024-04-18 11:10:13.457282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.469 [2024-04-18 11:10:13.457677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.469 [2024-04-18 11:10:13.457698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.727 11:10:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:05.727 11:10:13 -- common/autotest_common.sh@850 -- # return 0 00:19:05.727 11:10:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:05.727 11:10:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:05.727 11:10:13 -- common/autotest_common.sh@10 -- # set +x 00:19:05.727 11:10:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.727 11:10:13 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:19:05.727 11:10:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.727 11:10:13 -- common/autotest_common.sh@10 -- # set +x 00:19:05.985 [2024-04-18 11:10:13.959454] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.985 11:10:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.985 11:10:13 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:19:05.985 11:10:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.985 11:10:13 -- common/autotest_common.sh@10 -- # set +x 00:19:05.985 Malloc0 00:19:05.985 11:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.985 11:10:14 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:05.985 11:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.985 11:10:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.985 Delay0 00:19:05.985 11:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.985 11:10:14 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:05.985 11:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.985 11:10:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.985 11:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.985 11:10:14 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:19:05.985 11:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.985 11:10:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.985 11:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.985 11:10:14 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:05.985 11:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.985 11:10:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.985 [2024-04-18 11:10:14.089741] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.985 11:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.985 11:10:14 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:05.985 11:10:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.985 11:10:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.985 11:10:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.985 11:10:14 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:19:06.243 [2024-04-18 11:10:14.318868] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:08.206 Initializing NVMe Controllers 00:19:08.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:08.206 controller IO queue size 128 less than required 00:19:08.206 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:19:08.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:08.206 Initialization complete. Launching workers. 00:19:08.206 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27642 00:19:08.206 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27703, failed to submit 66 00:19:08.206 success 27642, unsuccess 61, failed 0 00:19:08.206 11:10:16 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:08.206 11:10:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.206 11:10:16 -- common/autotest_common.sh@10 -- # set +x 00:19:08.206 11:10:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.206 11:10:16 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:08.206 11:10:16 -- target/abort.sh@38 -- # nvmftestfini 00:19:08.206 11:10:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:08.206 11:10:16 -- nvmf/common.sh@117 -- # sync 00:19:08.465 11:10:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:08.465 11:10:16 -- nvmf/common.sh@120 -- # set +e 00:19:08.465 11:10:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:08.465 11:10:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.465 rmmod nvme_tcp 00:19:08.465 rmmod nvme_fabrics 00:19:08.465 rmmod nvme_keyring 00:19:08.465 11:10:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.465 11:10:16 -- nvmf/common.sh@124 -- # set -e 00:19:08.465 11:10:16 -- nvmf/common.sh@125 -- # return 0 00:19:08.465 11:10:16 -- nvmf/common.sh@478 -- # '[' -n 69663 ']' 00:19:08.465 11:10:16 -- nvmf/common.sh@479 -- # killprocess 69663 00:19:08.465 11:10:16 -- common/autotest_common.sh@936 -- # '[' -z 69663 ']' 00:19:08.465 11:10:16 -- common/autotest_common.sh@940 -- # kill -0 69663 00:19:08.465 11:10:16 -- common/autotest_common.sh@941 -- # uname 00:19:08.465 11:10:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:08.465 11:10:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69663 00:19:08.465 killing process with pid 69663 00:19:08.465 11:10:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:08.465 11:10:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:08.465 11:10:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69663' 00:19:08.465 11:10:16 -- common/autotest_common.sh@955 -- # kill 69663 00:19:08.465 11:10:16 -- common/autotest_common.sh@960 -- # wait 69663 00:19:09.841 11:10:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:09.841 11:10:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:09.841 11:10:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:09.841 11:10:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.841 11:10:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:09.841 11:10:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.841 
11:10:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.841 11:10:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.841 11:10:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:09.841 00:19:09.841 real 0m5.480s 00:19:09.841 user 0m14.534s 00:19:09.841 sys 0m1.158s 00:19:09.841 11:10:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:09.841 11:10:17 -- common/autotest_common.sh@10 -- # set +x 00:19:09.841 ************************************ 00:19:09.841 END TEST nvmf_abort 00:19:09.841 ************************************ 00:19:09.841 11:10:18 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:09.841 11:10:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:09.841 11:10:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:09.841 11:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:10.100 ************************************ 00:19:10.100 START TEST nvmf_ns_hotplug_stress 00:19:10.100 ************************************ 00:19:10.100 11:10:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:10.100 * Looking for test storage... 00:19:10.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:10.100 11:10:18 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:10.100 11:10:18 -- nvmf/common.sh@7 -- # uname -s 00:19:10.100 11:10:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.100 11:10:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.100 11:10:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.100 11:10:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.100 11:10:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.100 11:10:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.100 11:10:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.100 11:10:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.100 11:10:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.100 11:10:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.100 11:10:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:19:10.100 11:10:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:19:10.100 11:10:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.100 11:10:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.100 11:10:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:10.100 11:10:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.100 11:10:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.100 11:10:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.100 11:10:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.100 11:10:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.100 11:10:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.100 11:10:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.100 11:10:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.100 11:10:18 -- paths/export.sh@5 -- # export PATH 00:19:10.100 11:10:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.100 11:10:18 -- nvmf/common.sh@47 -- # : 0 00:19:10.100 11:10:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:10.100 11:10:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:10.100 11:10:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.100 11:10:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.100 11:10:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.100 11:10:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:10.100 11:10:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:10.100 11:10:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:10.100 11:10:18 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.100 11:10:18 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:19:10.100 11:10:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:10.100 11:10:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.100 11:10:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:10.100 11:10:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:10.100 11:10:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:10.100 11:10:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
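This second run of PATH exports is ns_hotplug_stress.sh sourcing the same test/nvmf/common.sh that abort.sh pulled in above; every target test in this log opens with essentially the same preamble before its own logic. Sketched from the traced lines (function names are the harness's own; the layout is paraphrased, not the literal script):

    # Approximate preamble shared by the target tests traced in this log.
    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # NVMF_* vars, nvmftestinit, rpc_cmd
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nvmftestinit                 # netns + veth topology, modprobe nvme-tcp (traced below)
    nvmfappstart -m 0xE          # start nvmf_tgt and install the nvmftestfini cleanup trap
    # ...test-specific rpc_py / rpc_cmd work follows here...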
00:19:10.100 11:10:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.100 11:10:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.100 11:10:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:10.100 11:10:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:10.100 11:10:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:10.100 11:10:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:10.100 11:10:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:10.100 11:10:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:10.100 11:10:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.100 11:10:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.100 11:10:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:10.100 11:10:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:10.100 11:10:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:10.100 11:10:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:10.100 11:10:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:10.100 11:10:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.100 11:10:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:10.100 11:10:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:10.100 11:10:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:10.100 11:10:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:10.100 11:10:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:10.100 11:10:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:10.100 Cannot find device "nvmf_tgt_br" 00:19:10.100 11:10:18 -- nvmf/common.sh@155 -- # true 00:19:10.100 11:10:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:10.100 Cannot find device "nvmf_tgt_br2" 00:19:10.100 11:10:18 -- nvmf/common.sh@156 -- # true 00:19:10.100 11:10:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:10.100 11:10:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:10.100 Cannot find device "nvmf_tgt_br" 00:19:10.100 11:10:18 -- nvmf/common.sh@158 -- # true 00:19:10.100 11:10:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:10.100 Cannot find device "nvmf_tgt_br2" 00:19:10.100 11:10:18 -- nvmf/common.sh@159 -- # true 00:19:10.100 11:10:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:10.100 11:10:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:10.359 11:10:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:10.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.359 11:10:18 -- nvmf/common.sh@162 -- # true 00:19:10.359 11:10:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.359 11:10:18 -- nvmf/common.sh@163 -- # true 00:19:10.359 11:10:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:10.359 11:10:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:10.359 11:10:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:10.359 11:10:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:10.359 11:10:18 -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:10.359 11:10:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:10.359 11:10:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:10.359 11:10:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:10.359 11:10:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:10.359 11:10:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:10.359 11:10:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:10.359 11:10:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:10.359 11:10:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:10.359 11:10:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:10.359 11:10:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:10.359 11:10:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:10.359 11:10:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:10.359 11:10:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:10.359 11:10:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:10.359 11:10:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:10.359 11:10:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:10.359 11:10:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:10.359 11:10:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.359 11:10:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:10.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:19:10.359 00:19:10.359 --- 10.0.0.2 ping statistics --- 00:19:10.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.359 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:10.359 11:10:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:10.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:10.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:19:10.359 00:19:10.359 --- 10.0.0.3 ping statistics --- 00:19:10.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.359 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:10.359 11:10:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:10.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:10.359 00:19:10.359 --- 10.0.0.1 ping statistics --- 00:19:10.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.359 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:10.359 11:10:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.360 11:10:18 -- nvmf/common.sh@422 -- # return 0 00:19:10.360 11:10:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:10.360 11:10:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.360 11:10:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:10.360 11:10:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:10.360 11:10:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.360 11:10:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:10.360 11:10:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:10.360 11:10:18 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:19:10.360 11:10:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:10.360 11:10:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:10.360 11:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:10.360 11:10:18 -- nvmf/common.sh@470 -- # nvmfpid=69950 00:19:10.360 11:10:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:10.360 11:10:18 -- nvmf/common.sh@471 -- # waitforlisten 69950 00:19:10.360 11:10:18 -- common/autotest_common.sh@817 -- # '[' -z 69950 ']' 00:19:10.360 11:10:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.360 11:10:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:10.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.360 11:10:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.360 11:10:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:10.360 11:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:10.618 [2024-04-18 11:10:18.641137] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:10.618 [2024-04-18 11:10:18.641352] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.618 [2024-04-18 11:10:18.817208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:11.184 [2024-04-18 11:10:19.166067] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.184 [2024-04-18 11:10:19.166167] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.184 [2024-04-18 11:10:19.166208] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.184 [2024-04-18 11:10:19.166236] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.184 [2024-04-18 11:10:19.166252] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
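The nvmf_veth_init output above builds the virtual test network the TCP transport tests run on: two veth pairs joined by a bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace and TCP port 4420 opened on the initiator side. A condensed sketch of that topology, distilled from the ip/iptables commands logged above (not the full nvmf/common.sh helper; the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br           # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up         # bridge joining both pairs
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT # allow NVMe/TCP to the target port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                                # initiator -> target reachability check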
00:19:11.184 [2024-04-18 11:10:19.166456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.184 [2024-04-18 11:10:19.166571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.184 [2024-04-18 11:10:19.166586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.442 11:10:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:11.442 11:10:19 -- common/autotest_common.sh@850 -- # return 0 00:19:11.442 11:10:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:11.442 11:10:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:11.442 11:10:19 -- common/autotest_common.sh@10 -- # set +x 00:19:11.442 11:10:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.442 11:10:19 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:19:11.442 11:10:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:11.700 [2024-04-18 11:10:19.848773] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.700 11:10:19 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:12.265 11:10:20 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.266 [2024-04-18 11:10:20.470058] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.523 11:10:20 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:12.523 11:10:20 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:19:12.782 Malloc0 00:19:13.040 11:10:21 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:13.298 Delay0 00:19:13.298 11:10:21 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:13.298 11:10:21 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:19:13.555 NULL1 00:19:13.555 11:10:21 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:13.813 11:10:21 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=70087 00:19:13.813 11:10:21 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:19:13.813 11:10:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:13.813 11:10:21 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.187 Read completed with error (sct=0, sc=11) 00:19:15.187 11:10:23 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:15.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:15.187 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:19:15.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:15.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:15.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:15.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:15.445 11:10:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:19:15.445 11:10:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:19:15.702 true 00:19:15.702 11:10:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:15.702 11:10:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:16.634 11:10:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:16.634 11:10:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:19:16.634 11:10:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:19:16.892 true 00:19:16.892 11:10:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:16.892 11:10:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:17.150 11:10:25 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:17.716 11:10:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:19:17.716 11:10:25 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:19:17.716 true 00:19:17.716 11:10:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:17.716 11:10:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:17.974 11:10:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:18.267 11:10:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:19:18.267 11:10:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:19:18.557 true 00:19:18.557 11:10:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:18.557 11:10:26 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:19.489 11:10:27 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:19.747 11:10:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:19:19.747 11:10:27 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:19:20.005 true 00:19:20.005 11:10:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:20.005 11:10:28 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:20.263 11:10:28 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:20.521 11:10:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 
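The repeating block here is the hot-plug stress loop itself: while the spdk_nvme_perf initiator (PERF_PID=70087) keeps issuing reads, the script hot-removes namespace 1, re-adds the Delay0 bdev as a namespace, and resizes the NULL1 bdev by one block each pass. The suppressed "Read completed with error (sct=0, sc=11)" messages from the initiator are the expected side effect of removing a namespace while reads are in flight. A sketch of one iteration, reconstructed from the rpc.py calls logged above; the actual logic lives in test/nvmf/target/ns_hotplug_stress.sh, so the loop structure and termination check shown here are an approximation:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do           # run until spdk_nvme_perf exits
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1        # hot-remove namespace 1 under I/O
      "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0      # hot-add the delay bdev back as a namespace
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"      # grow the null bdev (prints 'true' in the log)
  done

The Delay0 bdev created earlier with bdev_delay_create sits on top of Malloc0 and adds artificial latency, presumably so that I/O stays outstanding long enough for each hot-remove to race with in-flight reads.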
00:19:20.521 11:10:28 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:19:20.521 true 00:19:20.521 11:10:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:20.521 11:10:28 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:21.456 11:10:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:21.714 11:10:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:19:21.714 11:10:29 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:19:21.985 true 00:19:21.985 11:10:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:21.986 11:10:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.260 11:10:30 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:22.518 11:10:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:19:22.518 11:10:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:19:22.775 true 00:19:22.775 11:10:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:22.775 11:10:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:23.034 11:10:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:23.292 11:10:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:19:23.292 11:10:31 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:19:23.550 true 00:19:23.550 11:10:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:23.550 11:10:31 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:24.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:24.484 11:10:32 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:24.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:24.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:24.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:24.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:24.742 11:10:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:19:24.742 11:10:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:19:25.000 true 00:19:25.000 11:10:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:25.000 11:10:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:25.993 11:10:33 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:25.993 11:10:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:19:25.993 11:10:34 -- 
target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:19:26.251 true 00:19:26.251 11:10:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:26.251 11:10:34 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.508 11:10:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:26.766 11:10:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:19:26.766 11:10:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:19:27.022 true 00:19:27.022 11:10:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:27.022 11:10:35 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:27.954 11:10:35 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:27.954 11:10:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:19:27.954 11:10:36 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:19:28.210 true 00:19:28.210 11:10:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:28.210 11:10:36 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:28.469 11:10:36 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:28.727 11:10:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:19:28.727 11:10:36 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:19:28.984 true 00:19:28.984 11:10:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:28.984 11:10:37 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:29.242 11:10:37 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:29.500 11:10:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:19:29.500 11:10:37 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:19:29.758 true 00:19:29.758 11:10:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:29.758 11:10:37 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:30.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:30.947 11:10:38 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:30.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:30.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:30.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:30.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:30.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:31.205 11:10:39 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:19:31.205 11:10:39 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:19:31.463 true 00:19:31.463 11:10:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:31.463 11:10:39 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.052 11:10:40 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:32.323 11:10:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:19:32.323 11:10:40 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:19:32.581 true 00:19:32.581 11:10:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:32.581 11:10:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.838 11:10:41 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:33.096 11:10:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:19:33.096 11:10:41 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:19:33.354 true 00:19:33.354 11:10:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:33.354 11:10:41 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:33.612 11:10:41 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:33.870 11:10:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:19:33.870 11:10:42 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:19:34.128 true 00:19:34.128 11:10:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:34.128 11:10:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:35.062 11:10:43 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:35.321 11:10:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:19:35.321 11:10:43 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:19:35.582 true 00:19:35.841 11:10:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:35.841 11:10:43 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:35.841 11:10:44 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:36.099 11:10:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:19:36.099 11:10:44 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:19:36.358 true 00:19:36.358 11:10:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:36.358 11:10:44 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:19:37.290 11:10:45 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:37.548 11:10:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:19:37.548 11:10:45 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:19:37.806 true 00:19:37.806 11:10:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:37.806 11:10:45 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:38.063 11:10:46 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:38.321 11:10:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:19:38.321 11:10:46 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:19:38.578 true 00:19:38.578 11:10:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:38.578 11:10:46 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:38.835 11:10:46 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:39.092 11:10:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:19:39.092 11:10:47 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:19:39.349 true 00:19:39.349 11:10:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:39.349 11:10:47 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:40.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:40.280 11:10:48 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:40.537 11:10:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:19:40.537 11:10:48 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:19:40.795 true 00:19:40.795 11:10:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:40.795 11:10:48 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:41.052 11:10:49 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:41.310 11:10:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:19:41.310 11:10:49 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:19:41.310 true 00:19:41.567 11:10:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:41.567 11:10:49 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:41.825 11:10:49 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:41.825 11:10:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:19:41.825 11:10:50 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1027 00:19:42.083 true 00:19:42.341 11:10:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:42.341 11:10:50 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:43.298 11:10:51 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:43.556 11:10:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:19:43.556 11:10:51 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:19:43.813 true 00:19:43.813 11:10:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:43.813 11:10:51 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:44.071 11:10:52 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:44.330 Initializing NVMe Controllers 00:19:44.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:44.330 Controller IO queue size 128, less than required. 00:19:44.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:44.330 Controller IO queue size 128, less than required. 00:19:44.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:44.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:44.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:44.330 Initialization complete. Launching workers. 
00:19:44.330 ======================================================== 00:19:44.330 Latency(us) 00:19:44.330 Device Information : IOPS MiB/s Average min max 00:19:44.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 666.31 0.33 94605.48 4486.14 1032645.80 00:19:44.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7476.92 3.65 17118.69 4453.26 638343.12 00:19:44.330 ======================================================== 00:19:44.330 Total : 8143.23 3.98 23459.00 4453.26 1032645.80 00:19:44.330 00:19:44.330 11:10:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:19:44.330 11:10:52 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:19:44.632 true 00:19:44.632 11:10:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70087 00:19:44.632 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (70087) - No such process 00:19:44.632 11:10:52 -- target/ns_hotplug_stress.sh@44 -- # wait 70087 00:19:44.632 11:10:52 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:44.632 11:10:52 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:19:44.632 11:10:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:44.632 11:10:52 -- nvmf/common.sh@117 -- # sync 00:19:44.632 11:10:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.632 11:10:52 -- nvmf/common.sh@120 -- # set +e 00:19:44.632 11:10:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.632 11:10:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.632 rmmod nvme_tcp 00:19:44.632 rmmod nvme_fabrics 00:19:44.632 rmmod nvme_keyring 00:19:44.632 11:10:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:44.632 11:10:52 -- nvmf/common.sh@124 -- # set -e 00:19:44.632 11:10:52 -- nvmf/common.sh@125 -- # return 0 00:19:44.632 11:10:52 -- nvmf/common.sh@478 -- # '[' -n 69950 ']' 00:19:44.632 11:10:52 -- nvmf/common.sh@479 -- # killprocess 69950 00:19:44.632 11:10:52 -- common/autotest_common.sh@936 -- # '[' -z 69950 ']' 00:19:44.632 11:10:52 -- common/autotest_common.sh@940 -- # kill -0 69950 00:19:44.632 11:10:52 -- common/autotest_common.sh@941 -- # uname 00:19:44.632 11:10:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:44.632 11:10:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69950 00:19:44.633 killing process with pid 69950 00:19:44.633 11:10:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:44.633 11:10:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:44.633 11:10:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69950' 00:19:44.633 11:10:52 -- common/autotest_common.sh@955 -- # kill 69950 00:19:44.633 11:10:52 -- common/autotest_common.sh@960 -- # wait 69950 00:19:46.008 11:10:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:46.008 11:10:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:46.008 11:10:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:46.008 11:10:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.008 11:10:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:46.008 11:10:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.008 11:10:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.008 11:10:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.008 11:10:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 
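With the perf process gone, nvmftestfini tears the environment back down. A sketch of the cleanup sequence as it appears in the log above (the namespace teardown itself runs behind the fd-14 redirect, so that step is an assumption rather than something visible here):

  sync
  modprobe -v -r nvme-tcp                  # rmmod nvme_tcp; nvme_fabrics and nvme_keyring follow
  modprobe -v -r nvme-fabrics
  kill 69950 && wait 69950                 # killprocess: stop the nvmf_tgt started for this test
  _remove_spdk_ns                          # presumably deletes nvmf_tgt_ns_spdk (output sent to fd 14 above)
  ip -4 addr flush nvmf_init_if            # drop the initiator-side address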
00:19:46.008 ************************************ 00:19:46.008 END TEST nvmf_ns_hotplug_stress 00:19:46.008 ************************************ 00:19:46.008 00:19:46.008 real 0m35.961s 00:19:46.008 user 2m31.477s 00:19:46.008 sys 0m7.389s 00:19:46.008 11:10:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:46.008 11:10:54 -- common/autotest_common.sh@10 -- # set +x 00:19:46.008 11:10:54 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:46.008 11:10:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:46.008 11:10:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:46.008 11:10:54 -- common/autotest_common.sh@10 -- # set +x 00:19:46.008 ************************************ 00:19:46.008 START TEST nvmf_connect_stress 00:19:46.008 ************************************ 00:19:46.008 11:10:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:46.266 * Looking for test storage... 00:19:46.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:46.266 11:10:54 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:46.266 11:10:54 -- nvmf/common.sh@7 -- # uname -s 00:19:46.266 11:10:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.266 11:10:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.266 11:10:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.266 11:10:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.266 11:10:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.266 11:10:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.266 11:10:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.266 11:10:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.266 11:10:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.266 11:10:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.266 11:10:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:19:46.266 11:10:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:19:46.266 11:10:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.266 11:10:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.266 11:10:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:46.266 11:10:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.266 11:10:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:46.266 11:10:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.266 11:10:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.266 11:10:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.266 11:10:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.266 
11:10:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.266 11:10:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.266 11:10:54 -- paths/export.sh@5 -- # export PATH 00:19:46.266 11:10:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.266 11:10:54 -- nvmf/common.sh@47 -- # : 0 00:19:46.266 11:10:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:46.267 11:10:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:46.267 11:10:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.267 11:10:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.267 11:10:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.267 11:10:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:46.267 11:10:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:46.267 11:10:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:46.267 11:10:54 -- target/connect_stress.sh@12 -- # nvmftestinit 00:19:46.267 11:10:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:46.267 11:10:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.267 11:10:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:46.267 11:10:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:46.267 11:10:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:46.267 11:10:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.267 11:10:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.267 11:10:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.267 11:10:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:46.267 11:10:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:46.267 11:10:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:46.267 11:10:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:46.267 11:10:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:46.267 11:10:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:46.267 11:10:54 -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:19:46.267 11:10:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.267 11:10:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:46.267 11:10:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:46.267 11:10:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:46.267 11:10:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:46.267 11:10:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:46.267 11:10:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.267 11:10:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:46.267 11:10:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:46.267 11:10:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:46.267 11:10:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:46.267 11:10:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:46.267 11:10:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:46.267 Cannot find device "nvmf_tgt_br" 00:19:46.267 11:10:54 -- nvmf/common.sh@155 -- # true 00:19:46.267 11:10:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:46.267 Cannot find device "nvmf_tgt_br2" 00:19:46.267 11:10:54 -- nvmf/common.sh@156 -- # true 00:19:46.267 11:10:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:46.267 11:10:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:46.267 Cannot find device "nvmf_tgt_br" 00:19:46.267 11:10:54 -- nvmf/common.sh@158 -- # true 00:19:46.267 11:10:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:46.267 Cannot find device "nvmf_tgt_br2" 00:19:46.267 11:10:54 -- nvmf/common.sh@159 -- # true 00:19:46.267 11:10:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:46.267 11:10:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:46.267 11:10:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:46.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.267 11:10:54 -- nvmf/common.sh@162 -- # true 00:19:46.267 11:10:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:46.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.267 11:10:54 -- nvmf/common.sh@163 -- # true 00:19:46.267 11:10:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:46.267 11:10:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:46.267 11:10:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:46.267 11:10:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:46.267 11:10:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:46.267 11:10:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:46.267 11:10:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:46.267 11:10:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:46.524 11:10:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:46.524 11:10:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:46.524 11:10:54 -- nvmf/common.sh@184 -- # ip 
link set nvmf_init_br up 00:19:46.524 11:10:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:46.524 11:10:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:46.524 11:10:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:46.524 11:10:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:46.524 11:10:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:46.524 11:10:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:46.524 11:10:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:46.524 11:10:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:46.524 11:10:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:46.524 11:10:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:46.524 11:10:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:46.524 11:10:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:46.524 11:10:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:46.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:19:46.524 00:19:46.524 --- 10.0.0.2 ping statistics --- 00:19:46.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.524 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:46.524 11:10:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:46.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:46.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:19:46.524 00:19:46.524 --- 10.0.0.3 ping statistics --- 00:19:46.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.524 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:46.524 11:10:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:46.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:46.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:46.524 00:19:46.524 --- 10.0.0.1 ping statistics --- 00:19:46.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.524 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:46.524 11:10:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.524 11:10:54 -- nvmf/common.sh@422 -- # return 0 00:19:46.524 11:10:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:46.524 11:10:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.524 11:10:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:46.524 11:10:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:46.524 11:10:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.524 11:10:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:46.524 11:10:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:46.524 11:10:54 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:46.524 11:10:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:46.524 11:10:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:46.524 11:10:54 -- common/autotest_common.sh@10 -- # set +x 00:19:46.524 11:10:54 -- nvmf/common.sh@470 -- # nvmfpid=71228 00:19:46.524 11:10:54 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:46.524 11:10:54 -- nvmf/common.sh@471 -- # waitforlisten 71228 00:19:46.524 11:10:54 -- common/autotest_common.sh@817 -- # '[' -z 71228 ']' 00:19:46.524 11:10:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.524 11:10:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:46.524 11:10:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.524 11:10:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:46.524 11:10:54 -- common/autotest_common.sh@10 -- # set +x 00:19:46.524 [2024-04-18 11:10:54.742346] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:46.524 [2024-04-18 11:10:54.742513] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.782 [2024-04-18 11:10:54.923408] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:47.041 [2024-04-18 11:10:55.179061] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.041 [2024-04-18 11:10:55.179143] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.041 [2024-04-18 11:10:55.179164] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.041 [2024-04-18 11:10:55.179193] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.041 [2024-04-18 11:10:55.179208] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:47.041 [2024-04-18 11:10:55.179938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.041 [2024-04-18 11:10:55.180191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.041 [2024-04-18 11:10:55.180248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.607 11:10:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:47.607 11:10:55 -- common/autotest_common.sh@850 -- # return 0 00:19:47.607 11:10:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:47.607 11:10:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:47.607 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:19:47.607 11:10:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.607 11:10:55 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.607 11:10:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.607 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:19:47.607 [2024-04-18 11:10:55.723261] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.607 11:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.607 11:10:55 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:47.607 11:10:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.607 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:19:47.607 11:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.607 11:10:55 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.607 11:10:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.607 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:19:47.607 [2024-04-18 11:10:55.745435] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.607 11:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.607 11:10:55 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:47.607 11:10:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.607 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:19:47.607 NULL1 00:19:47.607 11:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.607 11:10:55 -- target/connect_stress.sh@21 -- # PERF_PID=71282 00:19:47.607 11:10:55 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:47.607 11:10:55 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:19:47.607 11:10:55 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # seq 1 20 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- 
target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.607 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.607 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.608 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.608 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.608 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.608 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.865 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.865 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.865 11:10:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:47.865 11:10:55 -- target/connect_stress.sh@28 -- # cat 00:19:47.865 11:10:55 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:47.865 11:10:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:47.865 11:10:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.865 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:19:48.123 11:10:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.123 11:10:56 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:48.123 11:10:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:48.123 11:10:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.123 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.381 11:10:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.381 11:10:56 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:48.381 11:10:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:48.381 11:10:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.381 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.640 11:10:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.640 11:10:56 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:48.640 11:10:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:48.640 11:10:56 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:48.640 11:10:56 -- common/autotest_common.sh@10 -- # set +x 00:19:49.206 11:10:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.206 11:10:57 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:49.206 11:10:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.206 11:10:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.206 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:19:49.467 11:10:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.467 11:10:57 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:49.467 11:10:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.467 11:10:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.467 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:19:49.731 11:10:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.731 11:10:57 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:49.731 11:10:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.731 11:10:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.731 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:19:49.990 11:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.990 11:10:58 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:49.990 11:10:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.990 11:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.990 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:19:50.554 11:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.554 11:10:58 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:50.554 11:10:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.554 11:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.554 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:19:50.811 11:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.811 11:10:58 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:50.811 11:10:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.811 11:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.811 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:19:51.067 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.067 11:10:59 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:51.067 11:10:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:51.067 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.067 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:19:51.324 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.324 11:10:59 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:51.324 11:10:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:51.324 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.324 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:19:51.582 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.582 11:10:59 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:51.582 11:10:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:51.582 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.582 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:19:52.148 11:11:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.148 11:11:00 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:52.148 11:11:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:52.148 11:11:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.148 
11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.406 11:11:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.406 11:11:00 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:52.406 11:11:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:52.406 11:11:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.406 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.664 11:11:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.664 11:11:00 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:52.664 11:11:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:52.664 11:11:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.664 11:11:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.921 11:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.921 11:11:01 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:52.921 11:11:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:52.921 11:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.921 11:11:01 -- common/autotest_common.sh@10 -- # set +x 00:19:53.486 11:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.486 11:11:01 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:53.486 11:11:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:53.486 11:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.486 11:11:01 -- common/autotest_common.sh@10 -- # set +x 00:19:53.743 11:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.743 11:11:01 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:53.743 11:11:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:53.743 11:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.743 11:11:01 -- common/autotest_common.sh@10 -- # set +x 00:19:53.999 11:11:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.999 11:11:02 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:53.999 11:11:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:53.999 11:11:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.999 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:19:54.255 11:11:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.255 11:11:02 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:54.255 11:11:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:54.255 11:11:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.255 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:19:54.819 11:11:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.819 11:11:02 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:54.819 11:11:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:54.819 11:11:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.819 11:11:02 -- common/autotest_common.sh@10 -- # set +x 00:19:55.075 11:11:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.075 11:11:03 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:55.075 11:11:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:55.075 11:11:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.075 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:19:55.332 11:11:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.332 11:11:03 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:55.332 11:11:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:55.332 11:11:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.332 11:11:03 -- 
common/autotest_common.sh@10 -- # set +x 00:19:55.589 11:11:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.589 11:11:03 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:55.589 11:11:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:55.589 11:11:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.589 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:19:55.912 11:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.912 11:11:04 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:55.912 11:11:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:55.912 11:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.912 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:19:56.478 11:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.478 11:11:04 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:56.478 11:11:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:56.478 11:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.478 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:19:56.736 11:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.736 11:11:04 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:56.736 11:11:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:56.736 11:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.736 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:19:56.994 11:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.994 11:11:05 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:56.994 11:11:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:56.994 11:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.994 11:11:05 -- common/autotest_common.sh@10 -- # set +x 00:19:57.252 11:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.252 11:11:05 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:57.252 11:11:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:57.252 11:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.252 11:11:05 -- common/autotest_common.sh@10 -- # set +x 00:19:57.819 11:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.819 11:11:05 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:57.819 11:11:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:57.819 11:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.819 11:11:05 -- common/autotest_common.sh@10 -- # set +x 00:19:58.078 11:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.078 11:11:06 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:58.078 11:11:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:58.078 11:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.078 11:11:06 -- common/autotest_common.sh@10 -- # set +x 00:19:58.078 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:58.336 11:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.336 11:11:06 -- target/connect_stress.sh@34 -- # kill -0 71282 00:19:58.336 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71282) - No such process 00:19:58.336 11:11:06 -- target/connect_stress.sh@38 -- # wait 71282 00:19:58.336 11:11:06 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:19:58.336 11:11:06 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:58.336 11:11:06 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:19:58.336 11:11:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:58.336 11:11:06 -- nvmf/common.sh@117 -- # sync 00:19:58.336 11:11:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:58.337 11:11:06 -- nvmf/common.sh@120 -- # set +e 00:19:58.337 11:11:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:58.337 11:11:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:58.337 rmmod nvme_tcp 00:19:58.337 rmmod nvme_fabrics 00:19:58.337 rmmod nvme_keyring 00:19:58.337 11:11:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.337 11:11:06 -- nvmf/common.sh@124 -- # set -e 00:19:58.337 11:11:06 -- nvmf/common.sh@125 -- # return 0 00:19:58.337 11:11:06 -- nvmf/common.sh@478 -- # '[' -n 71228 ']' 00:19:58.337 11:11:06 -- nvmf/common.sh@479 -- # killprocess 71228 00:19:58.337 11:11:06 -- common/autotest_common.sh@936 -- # '[' -z 71228 ']' 00:19:58.337 11:11:06 -- common/autotest_common.sh@940 -- # kill -0 71228 00:19:58.337 11:11:06 -- common/autotest_common.sh@941 -- # uname 00:19:58.337 11:11:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:58.337 11:11:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71228 00:19:58.337 killing process with pid 71228 00:19:58.337 11:11:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:58.337 11:11:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:58.337 11:11:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71228' 00:19:58.337 11:11:06 -- common/autotest_common.sh@955 -- # kill 71228 00:19:58.337 11:11:06 -- common/autotest_common.sh@960 -- # wait 71228 00:19:59.727 11:11:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:59.727 11:11:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:59.727 11:11:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:59.727 11:11:07 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.727 11:11:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.727 11:11:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.727 11:11:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.727 11:11:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.727 11:11:07 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:59.727 00:19:59.727 real 0m13.617s 00:19:59.727 user 0m43.499s 00:19:59.727 sys 0m3.530s 00:19:59.727 11:11:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:59.727 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:19:59.727 ************************************ 00:19:59.727 END TEST nvmf_connect_stress 00:19:59.727 ************************************ 00:19:59.727 11:11:07 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:59.727 11:11:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:59.727 11:11:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.727 11:11:07 -- common/autotest_common.sh@10 -- # set +x 00:19:59.727 ************************************ 00:19:59.727 START TEST nvmf_fused_ordering 00:19:59.727 ************************************ 00:19:59.727 11:11:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:59.993 * Looking for test storage... 
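The nvmf_fused_ordering test that starts here brings the target up the same way the connect_stress test above did: launch nvmf_tgt, create the TCP transport, subsystem, listener and a null bdev over RPC, run the test binary against 10.0.0.2:4420, then tear everything down. A minimal bash sketch of that flow, condensed from the RPC calls visible in this trace — the scripts/rpc.py path and the polling loop are illustrative stand-ins for the harness's rpc_cmd/waitforlisten helpers, not the actual fused_ordering.sh:

  #!/usr/bin/env bash
  # Condensed illustration of the per-test flow seen in this log (paths assumed).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

  "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x2 &        # the harness runs this inside a netns
  tgt_pid=$!
  until rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # crude waitforlisten stand-in

  rpc nvmf_create_transport -t tcp -o -u 8192    # "*** TCP Transport Init ***"
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512-byte blocks
  rpc bdev_wait_for_examine
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  "$SPDK_DIR/test/nvme/fused_ordering/fused_ordering" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

  kill "$tgt_pid"; wait "$tgt_pid"               # teardown; nvmftestfini also unloads nvme-tcp

The connect_stress variant above differs only in the workload step: it launches test/nvme/connect_stress/connect_stress with -t 10 and, while that runs, keeps checking it with kill -0 $PERF_PID and exercising the RPC server with the commands staged in rpc.txt — that is the long run of "kill -0 71282" checks above, ending in the expected "No such process" once the stress binary exits.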
00:19:59.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:59.994 11:11:07 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.994 11:11:07 -- nvmf/common.sh@7 -- # uname -s 00:19:59.994 11:11:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.994 11:11:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.994 11:11:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.994 11:11:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.994 11:11:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.994 11:11:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.994 11:11:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.994 11:11:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.994 11:11:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.994 11:11:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.994 11:11:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:19:59.994 11:11:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:19:59.994 11:11:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.994 11:11:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.994 11:11:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.994 11:11:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.994 11:11:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.994 11:11:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.994 11:11:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.994 11:11:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.994 11:11:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.994 11:11:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.994 11:11:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.994 11:11:08 -- paths/export.sh@5 -- # export PATH 00:19:59.994 11:11:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.994 11:11:08 -- nvmf/common.sh@47 -- # : 0 00:19:59.994 11:11:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.994 11:11:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.994 11:11:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.994 11:11:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.994 11:11:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.994 11:11:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.994 11:11:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.994 11:11:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.994 11:11:08 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:19:59.994 11:11:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:59.994 11:11:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.994 11:11:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:59.994 11:11:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:59.994 11:11:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:59.994 11:11:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.994 11:11:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.994 11:11:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.994 11:11:08 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:59.994 11:11:08 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:59.994 11:11:08 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:59.994 11:11:08 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:59.994 11:11:08 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:59.994 11:11:08 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:59.994 11:11:08 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.994 11:11:08 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.994 11:11:08 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:59.994 11:11:08 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:59.994 11:11:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:59.994 11:11:08 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:59.994 11:11:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:59.994 11:11:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:19:59.994 11:11:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:59.994 11:11:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:59.994 11:11:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:59.994 11:11:08 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:59.994 11:11:08 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:59.994 11:11:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:59.994 Cannot find device "nvmf_tgt_br" 00:19:59.994 11:11:08 -- nvmf/common.sh@155 -- # true 00:19:59.994 11:11:08 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:59.994 Cannot find device "nvmf_tgt_br2" 00:19:59.994 11:11:08 -- nvmf/common.sh@156 -- # true 00:19:59.994 11:11:08 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:59.994 11:11:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:59.994 Cannot find device "nvmf_tgt_br" 00:19:59.994 11:11:08 -- nvmf/common.sh@158 -- # true 00:19:59.994 11:11:08 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:59.994 Cannot find device "nvmf_tgt_br2" 00:19:59.994 11:11:08 -- nvmf/common.sh@159 -- # true 00:19:59.994 11:11:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:59.994 11:11:08 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:59.994 11:11:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:59.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.994 11:11:08 -- nvmf/common.sh@162 -- # true 00:19:59.994 11:11:08 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:59.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.994 11:11:08 -- nvmf/common.sh@163 -- # true 00:19:59.994 11:11:08 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:59.994 11:11:08 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:59.994 11:11:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:59.994 11:11:08 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:59.994 11:11:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:59.994 11:11:08 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.253 11:11:08 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.253 11:11:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:00.253 11:11:08 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:00.253 11:11:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:00.253 11:11:08 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:00.253 11:11:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:00.253 11:11:08 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:00.253 11:11:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.253 11:11:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.253 11:11:08 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.253 11:11:08 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:00.253 11:11:08 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:00.253 11:11:08 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.253 11:11:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.253 11:11:08 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.253 11:11:08 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.253 11:11:08 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.253 11:11:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:00.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:20:00.253 00:20:00.253 --- 10.0.0.2 ping statistics --- 00:20:00.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.253 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:00.253 11:11:08 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:00.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:20:00.253 00:20:00.253 --- 10.0.0.3 ping statistics --- 00:20:00.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.253 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:00.253 11:11:08 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:20:00.253 00:20:00.253 --- 10.0.0.1 ping statistics --- 00:20:00.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.253 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:00.253 11:11:08 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.253 11:11:08 -- nvmf/common.sh@422 -- # return 0 00:20:00.253 11:11:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:00.253 11:11:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.253 11:11:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:00.253 11:11:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:00.253 11:11:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.253 11:11:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:00.253 11:11:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:00.253 11:11:08 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:00.253 11:11:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:00.253 11:11:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:00.253 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:20:00.253 11:11:08 -- nvmf/common.sh@470 -- # nvmfpid=71622 00:20:00.253 11:11:08 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:00.253 11:11:08 -- nvmf/common.sh@471 -- # waitforlisten 71622 00:20:00.253 11:11:08 -- common/autotest_common.sh@817 -- # '[' -z 71622 ']' 00:20:00.253 11:11:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.253 11:11:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.253 11:11:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
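The "Waiting for process to start up..." message follows the veth plumbing that nvmf_veth_init just performed: the target process runs inside the nvmf_tgt_ns_spdk namespace and listens on 10.0.0.2 (nvmf_tgt_if), the initiator side keeps 10.0.0.1 on nvmf_init_if, and the two peer ends (nvmf_init_br, nvmf_tgt_br) are enslaved to the nvmf_br bridge, with iptables rules admitting TCP port 4420. A condensed sketch of that topology, using the same commands as the trace but omitting the leftover-cleanup steps and the secondary nvmf_tgt_if2/10.0.0.3 leg:

  # Host/initiator side <-> bridge <-> network namespace holding the SPDK target.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2   # the sanity pings in the trace check reachability in both directions

After the pings succeed, nvmfappstart launches nvmf_tgt inside the namespace (pid 71622 here) and waitforlisten holds the test until that process is answering on /var/tmp/spdk.sock, which is the point this message marks.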
00:20:00.253 11:11:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.253 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:20:00.511 [2024-04-18 11:11:08.513044] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:00.511 [2024-04-18 11:11:08.513237] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.511 [2024-04-18 11:11:08.696414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.769 [2024-04-18 11:11:08.984199] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.769 [2024-04-18 11:11:08.984525] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.769 [2024-04-18 11:11:08.984635] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.769 [2024-04-18 11:11:08.984758] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.769 [2024-04-18 11:11:08.984906] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.769 [2024-04-18 11:11:08.984975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.334 11:11:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:01.334 11:11:09 -- common/autotest_common.sh@850 -- # return 0 00:20:01.334 11:11:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:01.334 11:11:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:01.334 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.592 11:11:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.592 11:11:09 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:01.592 11:11:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.592 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.592 [2024-04-18 11:11:09.570388] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.592 11:11:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.592 11:11:09 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:01.592 11:11:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.592 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.592 11:11:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.592 11:11:09 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.592 11:11:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.592 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.592 [2024-04-18 11:11:09.586536] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.592 11:11:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.592 11:11:09 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:01.592 11:11:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.592 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.592 NULL1 00:20:01.592 11:11:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.592 11:11:09 -- 
target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:01.592 11:11:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.592 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.592 11:11:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.592 11:11:09 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:01.592 11:11:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.592 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.592 11:11:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.592 11:11:09 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:01.592 [2024-04-18 11:11:09.664878] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:01.593 [2024-04-18 11:11:09.664986] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71672 ] 00:20:02.167 Attached to nqn.2016-06.io.spdk:cnode1 00:20:02.167 Namespace ID: 1 size: 1GB 00:20:02.167 fused_ordering(0) 00:20:02.167 fused_ordering(1) 00:20:02.167 fused_ordering(2) 00:20:02.167 fused_ordering(3) 00:20:02.167 fused_ordering(4) 00:20:02.167 fused_ordering(5) 00:20:02.167 fused_ordering(6) 00:20:02.167 fused_ordering(7) 00:20:02.167 fused_ordering(8) 00:20:02.167 fused_ordering(9) 00:20:02.167 fused_ordering(10) 00:20:02.167 fused_ordering(11) 00:20:02.167 fused_ordering(12) 00:20:02.167 fused_ordering(13) 00:20:02.167 fused_ordering(14) 00:20:02.167 fused_ordering(15) 00:20:02.167 fused_ordering(16) 00:20:02.167 fused_ordering(17) 00:20:02.167 fused_ordering(18) 00:20:02.167 fused_ordering(19) 00:20:02.167 fused_ordering(20) 00:20:02.167 fused_ordering(21) 00:20:02.167 fused_ordering(22) 00:20:02.167 fused_ordering(23) 00:20:02.167 fused_ordering(24) 00:20:02.168 fused_ordering(25) 00:20:02.168 fused_ordering(26) 00:20:02.168 fused_ordering(27) 00:20:02.168 fused_ordering(28) 00:20:02.168 fused_ordering(29) 00:20:02.168 fused_ordering(30) 00:20:02.168 fused_ordering(31) 00:20:02.168 fused_ordering(32) 00:20:02.168 fused_ordering(33) 00:20:02.168 fused_ordering(34) 00:20:02.168 fused_ordering(35) 00:20:02.168 fused_ordering(36) 00:20:02.168 fused_ordering(37) 00:20:02.168 fused_ordering(38) 00:20:02.168 fused_ordering(39) 00:20:02.168 fused_ordering(40) 00:20:02.168 fused_ordering(41) 00:20:02.168 fused_ordering(42) 00:20:02.168 fused_ordering(43) 00:20:02.168 fused_ordering(44) 00:20:02.168 fused_ordering(45) 00:20:02.168 fused_ordering(46) 00:20:02.168 fused_ordering(47) 00:20:02.168 fused_ordering(48) 00:20:02.168 fused_ordering(49) 00:20:02.168 fused_ordering(50) 00:20:02.168 fused_ordering(51) 00:20:02.168 fused_ordering(52) 00:20:02.168 fused_ordering(53) 00:20:02.168 fused_ordering(54) 00:20:02.168 fused_ordering(55) 00:20:02.168 fused_ordering(56) 00:20:02.168 fused_ordering(57) 00:20:02.168 fused_ordering(58) 00:20:02.168 fused_ordering(59) 00:20:02.168 fused_ordering(60) 00:20:02.168 fused_ordering(61) 00:20:02.168 fused_ordering(62) 00:20:02.168 fused_ordering(63) 00:20:02.168 fused_ordering(64) 00:20:02.168 fused_ordering(65) 00:20:02.168 fused_ordering(66) 00:20:02.168 fused_ordering(67) 00:20:02.168 fused_ordering(68) 00:20:02.168 
fused_ordering(69) 00:20:02.168 fused_ordering(70) 00:20:02.168 fused_ordering(71) 00:20:02.168 fused_ordering(72) 00:20:02.168 fused_ordering(73) 00:20:02.168 fused_ordering(74) 00:20:02.168 fused_ordering(75) 00:20:02.168 fused_ordering(76) 00:20:02.168 fused_ordering(77) 00:20:02.168 fused_ordering(78) 00:20:02.168 fused_ordering(79) 00:20:02.168 fused_ordering(80) 00:20:02.168 fused_ordering(81) 00:20:02.168 fused_ordering(82) 00:20:02.168 fused_ordering(83) 00:20:02.168 fused_ordering(84) 00:20:02.168 fused_ordering(85) 00:20:02.168 fused_ordering(86) 00:20:02.168 fused_ordering(87) 00:20:02.168 fused_ordering(88) 00:20:02.168 fused_ordering(89) 00:20:02.168 fused_ordering(90) 00:20:02.168 fused_ordering(91) 00:20:02.168 fused_ordering(92) 00:20:02.168 fused_ordering(93) 00:20:02.168 fused_ordering(94) 00:20:02.168 fused_ordering(95) 00:20:02.168 fused_ordering(96) 00:20:02.168 fused_ordering(97) 00:20:02.168 fused_ordering(98) 00:20:02.168 fused_ordering(99) 00:20:02.168 fused_ordering(100) 00:20:02.168 fused_ordering(101) 00:20:02.168 fused_ordering(102) 00:20:02.168 fused_ordering(103) 00:20:02.168 fused_ordering(104) 00:20:02.168 fused_ordering(105) 00:20:02.168 fused_ordering(106) 00:20:02.168 fused_ordering(107) 00:20:02.168 fused_ordering(108) 00:20:02.168 fused_ordering(109) 00:20:02.168 fused_ordering(110) 00:20:02.168 fused_ordering(111) 00:20:02.168 fused_ordering(112) 00:20:02.168 fused_ordering(113) 00:20:02.168 fused_ordering(114) 00:20:02.168 fused_ordering(115) 00:20:02.168 fused_ordering(116) 00:20:02.168 fused_ordering(117) 00:20:02.168 fused_ordering(118) 00:20:02.168 fused_ordering(119) 00:20:02.168 fused_ordering(120) 00:20:02.168 fused_ordering(121) 00:20:02.168 fused_ordering(122) 00:20:02.168 fused_ordering(123) 00:20:02.168 fused_ordering(124) 00:20:02.168 fused_ordering(125) 00:20:02.168 fused_ordering(126) 00:20:02.168 fused_ordering(127) 00:20:02.168 fused_ordering(128) 00:20:02.168 fused_ordering(129) 00:20:02.168 fused_ordering(130) 00:20:02.168 fused_ordering(131) 00:20:02.168 fused_ordering(132) 00:20:02.168 fused_ordering(133) 00:20:02.168 fused_ordering(134) 00:20:02.168 fused_ordering(135) 00:20:02.168 fused_ordering(136) 00:20:02.168 fused_ordering(137) 00:20:02.168 fused_ordering(138) 00:20:02.168 fused_ordering(139) 00:20:02.168 fused_ordering(140) 00:20:02.168 fused_ordering(141) 00:20:02.168 fused_ordering(142) 00:20:02.168 fused_ordering(143) 00:20:02.168 fused_ordering(144) 00:20:02.168 fused_ordering(145) 00:20:02.168 fused_ordering(146) 00:20:02.168 fused_ordering(147) 00:20:02.168 fused_ordering(148) 00:20:02.168 fused_ordering(149) 00:20:02.168 fused_ordering(150) 00:20:02.168 fused_ordering(151) 00:20:02.168 fused_ordering(152) 00:20:02.168 fused_ordering(153) 00:20:02.168 fused_ordering(154) 00:20:02.168 fused_ordering(155) 00:20:02.168 fused_ordering(156) 00:20:02.168 fused_ordering(157) 00:20:02.168 fused_ordering(158) 00:20:02.168 fused_ordering(159) 00:20:02.168 fused_ordering(160) 00:20:02.168 fused_ordering(161) 00:20:02.168 fused_ordering(162) 00:20:02.168 fused_ordering(163) 00:20:02.168 fused_ordering(164) 00:20:02.168 fused_ordering(165) 00:20:02.168 fused_ordering(166) 00:20:02.168 fused_ordering(167) 00:20:02.168 fused_ordering(168) 00:20:02.168 fused_ordering(169) 00:20:02.168 fused_ordering(170) 00:20:02.168 fused_ordering(171) 00:20:02.168 fused_ordering(172) 00:20:02.168 fused_ordering(173) 00:20:02.168 fused_ordering(174) 00:20:02.168 fused_ordering(175) 00:20:02.168 fused_ordering(176) 00:20:02.168 fused_ordering(177) 
00:20:02.168 fused_ordering(178) 00:20:02.168 fused_ordering(179) 00:20:02.168 fused_ordering(180) 00:20:02.168 fused_ordering(181) 00:20:02.168 fused_ordering(182) 00:20:02.168 fused_ordering(183) 00:20:02.168 fused_ordering(184) 00:20:02.168 fused_ordering(185) 00:20:02.168 fused_ordering(186) 00:20:02.168 fused_ordering(187) 00:20:02.168 fused_ordering(188) 00:20:02.168 fused_ordering(189) 00:20:02.168 fused_ordering(190) 00:20:02.168 fused_ordering(191) 00:20:02.168 fused_ordering(192) 00:20:02.168 fused_ordering(193) 00:20:02.168 fused_ordering(194) 00:20:02.168 fused_ordering(195) 00:20:02.168 fused_ordering(196) 00:20:02.168 fused_ordering(197) 00:20:02.168 fused_ordering(198) 00:20:02.168 fused_ordering(199) 00:20:02.168 fused_ordering(200) 00:20:02.168 fused_ordering(201) 00:20:02.168 fused_ordering(202) 00:20:02.168 fused_ordering(203) 00:20:02.168 fused_ordering(204) 00:20:02.168 fused_ordering(205) 00:20:02.426 fused_ordering(206) 00:20:02.426 fused_ordering(207) 00:20:02.426 fused_ordering(208) 00:20:02.426 fused_ordering(209) 00:20:02.426 fused_ordering(210) 00:20:02.426 fused_ordering(211) 00:20:02.426 fused_ordering(212) 00:20:02.426 fused_ordering(213) 00:20:02.426 fused_ordering(214) 00:20:02.426 fused_ordering(215) 00:20:02.426 fused_ordering(216) 00:20:02.426 fused_ordering(217) 00:20:02.426 fused_ordering(218) 00:20:02.426 fused_ordering(219) 00:20:02.426 fused_ordering(220) 00:20:02.426 fused_ordering(221) 00:20:02.426 fused_ordering(222) 00:20:02.426 fused_ordering(223) 00:20:02.426 fused_ordering(224) 00:20:02.426 fused_ordering(225) 00:20:02.426 fused_ordering(226) 00:20:02.426 fused_ordering(227) 00:20:02.426 fused_ordering(228) 00:20:02.426 fused_ordering(229) 00:20:02.426 fused_ordering(230) 00:20:02.426 fused_ordering(231) 00:20:02.426 fused_ordering(232) 00:20:02.426 fused_ordering(233) 00:20:02.426 fused_ordering(234) 00:20:02.426 fused_ordering(235) 00:20:02.426 fused_ordering(236) 00:20:02.426 fused_ordering(237) 00:20:02.426 fused_ordering(238) 00:20:02.426 fused_ordering(239) 00:20:02.426 fused_ordering(240) 00:20:02.426 fused_ordering(241) 00:20:02.426 fused_ordering(242) 00:20:02.426 fused_ordering(243) 00:20:02.426 fused_ordering(244) 00:20:02.426 fused_ordering(245) 00:20:02.426 fused_ordering(246) 00:20:02.426 fused_ordering(247) 00:20:02.426 fused_ordering(248) 00:20:02.426 fused_ordering(249) 00:20:02.426 fused_ordering(250) 00:20:02.426 fused_ordering(251) 00:20:02.426 fused_ordering(252) 00:20:02.426 fused_ordering(253) 00:20:02.426 fused_ordering(254) 00:20:02.426 fused_ordering(255) 00:20:02.426 fused_ordering(256) 00:20:02.426 fused_ordering(257) 00:20:02.426 fused_ordering(258) 00:20:02.426 fused_ordering(259) 00:20:02.426 fused_ordering(260) 00:20:02.426 fused_ordering(261) 00:20:02.426 fused_ordering(262) 00:20:02.426 fused_ordering(263) 00:20:02.426 fused_ordering(264) 00:20:02.426 fused_ordering(265) 00:20:02.426 fused_ordering(266) 00:20:02.426 fused_ordering(267) 00:20:02.426 fused_ordering(268) 00:20:02.427 fused_ordering(269) 00:20:02.427 fused_ordering(270) 00:20:02.427 fused_ordering(271) 00:20:02.427 fused_ordering(272) 00:20:02.427 fused_ordering(273) 00:20:02.427 fused_ordering(274) 00:20:02.427 fused_ordering(275) 00:20:02.427 fused_ordering(276) 00:20:02.427 fused_ordering(277) 00:20:02.427 fused_ordering(278) 00:20:02.427 fused_ordering(279) 00:20:02.427 fused_ordering(280) 00:20:02.427 fused_ordering(281) 00:20:02.427 fused_ordering(282) 00:20:02.427 fused_ordering(283) 00:20:02.427 fused_ordering(284) 00:20:02.427 
fused_ordering(285) 00:20:02.427 fused_ordering(286) 00:20:02.427 fused_ordering(287) 00:20:02.427 fused_ordering(288) 00:20:02.427 fused_ordering(289) 00:20:02.427 fused_ordering(290) 00:20:02.427 fused_ordering(291) 00:20:02.427 fused_ordering(292) 00:20:02.427 fused_ordering(293) 00:20:02.427 fused_ordering(294) 00:20:02.427 fused_ordering(295) 00:20:02.427 fused_ordering(296) 00:20:02.427 fused_ordering(297) 00:20:02.427 fused_ordering(298) 00:20:02.427 fused_ordering(299) 00:20:02.427 fused_ordering(300) 00:20:02.427 fused_ordering(301) 00:20:02.427 fused_ordering(302) 00:20:02.427 fused_ordering(303) 00:20:02.427 fused_ordering(304) 00:20:02.427 fused_ordering(305) 00:20:02.427 fused_ordering(306) 00:20:02.427 fused_ordering(307) 00:20:02.427 fused_ordering(308) 00:20:02.427 fused_ordering(309) 00:20:02.427 fused_ordering(310) 00:20:02.427 fused_ordering(311) 00:20:02.427 fused_ordering(312) 00:20:02.427 fused_ordering(313) 00:20:02.427 fused_ordering(314) 00:20:02.427 fused_ordering(315) 00:20:02.427 fused_ordering(316) 00:20:02.427 fused_ordering(317) 00:20:02.427 fused_ordering(318) 00:20:02.427 fused_ordering(319) 00:20:02.427 fused_ordering(320) 00:20:02.427 fused_ordering(321) 00:20:02.427 fused_ordering(322) 00:20:02.427 fused_ordering(323) 00:20:02.427 fused_ordering(324) 00:20:02.427 fused_ordering(325) 00:20:02.427 fused_ordering(326) 00:20:02.427 fused_ordering(327) 00:20:02.427 fused_ordering(328) 00:20:02.427 fused_ordering(329) 00:20:02.427 fused_ordering(330) 00:20:02.427 fused_ordering(331) 00:20:02.427 fused_ordering(332) 00:20:02.427 fused_ordering(333) 00:20:02.427 fused_ordering(334) 00:20:02.427 fused_ordering(335) 00:20:02.427 fused_ordering(336) 00:20:02.427 fused_ordering(337) 00:20:02.427 fused_ordering(338) 00:20:02.427 fused_ordering(339) 00:20:02.427 fused_ordering(340) 00:20:02.427 fused_ordering(341) 00:20:02.427 fused_ordering(342) 00:20:02.427 fused_ordering(343) 00:20:02.427 fused_ordering(344) 00:20:02.427 fused_ordering(345) 00:20:02.427 fused_ordering(346) 00:20:02.427 fused_ordering(347) 00:20:02.427 fused_ordering(348) 00:20:02.427 fused_ordering(349) 00:20:02.427 fused_ordering(350) 00:20:02.427 fused_ordering(351) 00:20:02.427 fused_ordering(352) 00:20:02.427 fused_ordering(353) 00:20:02.427 fused_ordering(354) 00:20:02.427 fused_ordering(355) 00:20:02.427 fused_ordering(356) 00:20:02.427 fused_ordering(357) 00:20:02.427 fused_ordering(358) 00:20:02.427 fused_ordering(359) 00:20:02.427 fused_ordering(360) 00:20:02.427 fused_ordering(361) 00:20:02.427 fused_ordering(362) 00:20:02.427 fused_ordering(363) 00:20:02.427 fused_ordering(364) 00:20:02.427 fused_ordering(365) 00:20:02.427 fused_ordering(366) 00:20:02.427 fused_ordering(367) 00:20:02.427 fused_ordering(368) 00:20:02.427 fused_ordering(369) 00:20:02.427 fused_ordering(370) 00:20:02.427 fused_ordering(371) 00:20:02.427 fused_ordering(372) 00:20:02.427 fused_ordering(373) 00:20:02.427 fused_ordering(374) 00:20:02.427 fused_ordering(375) 00:20:02.427 fused_ordering(376) 00:20:02.427 fused_ordering(377) 00:20:02.427 fused_ordering(378) 00:20:02.427 fused_ordering(379) 00:20:02.427 fused_ordering(380) 00:20:02.427 fused_ordering(381) 00:20:02.427 fused_ordering(382) 00:20:02.427 fused_ordering(383) 00:20:02.427 fused_ordering(384) 00:20:02.427 fused_ordering(385) 00:20:02.427 fused_ordering(386) 00:20:02.427 fused_ordering(387) 00:20:02.427 fused_ordering(388) 00:20:02.427 fused_ordering(389) 00:20:02.427 fused_ordering(390) 00:20:02.427 fused_ordering(391) 00:20:02.427 fused_ordering(392) 
00:20:02.427 fused_ordering(393) 00:20:02.427 fused_ordering(394) 00:20:02.427 fused_ordering(395) 00:20:02.427 fused_ordering(396) 00:20:02.427 fused_ordering(397) 00:20:02.427 fused_ordering(398) 00:20:02.427 fused_ordering(399) 00:20:02.427 fused_ordering(400) 00:20:02.427 fused_ordering(401) 00:20:02.427 fused_ordering(402) 00:20:02.427 fused_ordering(403) 00:20:02.427 fused_ordering(404) 00:20:02.427 fused_ordering(405) 00:20:02.427 fused_ordering(406) 00:20:02.427 fused_ordering(407) 00:20:02.427 fused_ordering(408) 00:20:02.427 fused_ordering(409) 00:20:02.427 fused_ordering(410) 00:20:02.992 fused_ordering(411) 00:20:02.992 fused_ordering(412) 00:20:02.992 fused_ordering(413) 00:20:02.992 fused_ordering(414) 00:20:02.992 fused_ordering(415) 00:20:02.992 fused_ordering(416) 00:20:02.992 fused_ordering(417) 00:20:02.992 fused_ordering(418) 00:20:02.992 fused_ordering(419) 00:20:02.992 fused_ordering(420) 00:20:02.992 fused_ordering(421) 00:20:02.992 fused_ordering(422) 00:20:02.992 fused_ordering(423) 00:20:02.992 fused_ordering(424) 00:20:02.992 fused_ordering(425) 00:20:02.992 fused_ordering(426) 00:20:02.992 fused_ordering(427) 00:20:02.992 fused_ordering(428) 00:20:02.992 fused_ordering(429) 00:20:02.992 fused_ordering(430) 00:20:02.992 fused_ordering(431) 00:20:02.992 fused_ordering(432) 00:20:02.992 fused_ordering(433) 00:20:02.992 fused_ordering(434) 00:20:02.992 fused_ordering(435) 00:20:02.992 fused_ordering(436) 00:20:02.992 fused_ordering(437) 00:20:02.992 fused_ordering(438) 00:20:02.992 fused_ordering(439) 00:20:02.992 fused_ordering(440) 00:20:02.992 fused_ordering(441) 00:20:02.992 fused_ordering(442) 00:20:02.992 fused_ordering(443) 00:20:02.992 fused_ordering(444) 00:20:02.992 fused_ordering(445) 00:20:02.992 fused_ordering(446) 00:20:02.992 fused_ordering(447) 00:20:02.992 fused_ordering(448) 00:20:02.992 fused_ordering(449) 00:20:02.992 fused_ordering(450) 00:20:02.992 fused_ordering(451) 00:20:02.992 fused_ordering(452) 00:20:02.992 fused_ordering(453) 00:20:02.992 fused_ordering(454) 00:20:02.992 fused_ordering(455) 00:20:02.992 fused_ordering(456) 00:20:02.992 fused_ordering(457) 00:20:02.992 fused_ordering(458) 00:20:02.992 fused_ordering(459) 00:20:02.992 fused_ordering(460) 00:20:02.992 fused_ordering(461) 00:20:02.992 fused_ordering(462) 00:20:02.992 fused_ordering(463) 00:20:02.992 fused_ordering(464) 00:20:02.992 fused_ordering(465) 00:20:02.992 fused_ordering(466) 00:20:02.992 fused_ordering(467) 00:20:02.992 fused_ordering(468) 00:20:02.992 fused_ordering(469) 00:20:02.992 fused_ordering(470) 00:20:02.992 fused_ordering(471) 00:20:02.992 fused_ordering(472) 00:20:02.992 fused_ordering(473) 00:20:02.992 fused_ordering(474) 00:20:02.992 fused_ordering(475) 00:20:02.992 fused_ordering(476) 00:20:02.992 fused_ordering(477) 00:20:02.992 fused_ordering(478) 00:20:02.992 fused_ordering(479) 00:20:02.992 fused_ordering(480) 00:20:02.992 fused_ordering(481) 00:20:02.992 fused_ordering(482) 00:20:02.992 fused_ordering(483) 00:20:02.992 fused_ordering(484) 00:20:02.992 fused_ordering(485) 00:20:02.992 fused_ordering(486) 00:20:02.992 fused_ordering(487) 00:20:02.992 fused_ordering(488) 00:20:02.992 fused_ordering(489) 00:20:02.992 fused_ordering(490) 00:20:02.992 fused_ordering(491) 00:20:02.992 fused_ordering(492) 00:20:02.992 fused_ordering(493) 00:20:02.992 fused_ordering(494) 00:20:02.992 fused_ordering(495) 00:20:02.992 fused_ordering(496) 00:20:02.992 fused_ordering(497) 00:20:02.992 fused_ordering(498) 00:20:02.992 fused_ordering(499) 00:20:02.992 
fused_ordering(500) 00:20:02.992 fused_ordering(501) 00:20:02.992 fused_ordering(502) 00:20:02.992 fused_ordering(503) 00:20:02.992 fused_ordering(504) 00:20:02.992 fused_ordering(505) 00:20:02.992 fused_ordering(506) 00:20:02.992 fused_ordering(507) 00:20:02.992 fused_ordering(508) 00:20:02.992 fused_ordering(509) 00:20:02.992 fused_ordering(510) 00:20:02.992 fused_ordering(511) 00:20:02.992 fused_ordering(512) 00:20:02.992 fused_ordering(513) 00:20:02.992 fused_ordering(514) 00:20:02.992 fused_ordering(515) 00:20:02.992 fused_ordering(516) 00:20:02.992 fused_ordering(517) 00:20:02.992 fused_ordering(518) 00:20:02.992 fused_ordering(519) 00:20:02.992 fused_ordering(520) 00:20:02.992 fused_ordering(521) 00:20:02.992 fused_ordering(522) 00:20:02.992 fused_ordering(523) 00:20:02.992 fused_ordering(524) 00:20:02.992 fused_ordering(525) 00:20:02.992 fused_ordering(526) 00:20:02.992 fused_ordering(527) 00:20:02.992 fused_ordering(528) 00:20:02.992 fused_ordering(529) 00:20:02.992 fused_ordering(530) 00:20:02.992 fused_ordering(531) 00:20:02.992 fused_ordering(532) 00:20:02.992 fused_ordering(533) 00:20:02.992 fused_ordering(534) 00:20:02.992 fused_ordering(535) 00:20:02.992 fused_ordering(536) 00:20:02.992 fused_ordering(537) 00:20:02.992 fused_ordering(538) 00:20:02.992 fused_ordering(539) 00:20:02.992 fused_ordering(540) 00:20:02.992 fused_ordering(541) 00:20:02.992 fused_ordering(542) 00:20:02.992 fused_ordering(543) 00:20:02.992 fused_ordering(544) 00:20:02.992 fused_ordering(545) 00:20:02.992 fused_ordering(546) 00:20:02.992 fused_ordering(547) 00:20:02.992 fused_ordering(548) 00:20:02.992 fused_ordering(549) 00:20:02.992 fused_ordering(550) 00:20:02.992 fused_ordering(551) 00:20:02.992 fused_ordering(552) 00:20:02.992 fused_ordering(553) 00:20:02.992 fused_ordering(554) 00:20:02.992 fused_ordering(555) 00:20:02.992 fused_ordering(556) 00:20:02.992 fused_ordering(557) 00:20:02.992 fused_ordering(558) 00:20:02.992 fused_ordering(559) 00:20:02.992 fused_ordering(560) 00:20:02.992 fused_ordering(561) 00:20:02.992 fused_ordering(562) 00:20:02.992 fused_ordering(563) 00:20:02.992 fused_ordering(564) 00:20:02.992 fused_ordering(565) 00:20:02.992 fused_ordering(566) 00:20:02.992 fused_ordering(567) 00:20:02.992 fused_ordering(568) 00:20:02.992 fused_ordering(569) 00:20:02.992 fused_ordering(570) 00:20:02.992 fused_ordering(571) 00:20:02.992 fused_ordering(572) 00:20:02.992 fused_ordering(573) 00:20:02.992 fused_ordering(574) 00:20:02.992 fused_ordering(575) 00:20:02.993 fused_ordering(576) 00:20:02.993 fused_ordering(577) 00:20:02.993 fused_ordering(578) 00:20:02.993 fused_ordering(579) 00:20:02.993 fused_ordering(580) 00:20:02.993 fused_ordering(581) 00:20:02.993 fused_ordering(582) 00:20:02.993 fused_ordering(583) 00:20:02.993 fused_ordering(584) 00:20:02.993 fused_ordering(585) 00:20:02.993 fused_ordering(586) 00:20:02.993 fused_ordering(587) 00:20:02.993 fused_ordering(588) 00:20:02.993 fused_ordering(589) 00:20:02.993 fused_ordering(590) 00:20:02.993 fused_ordering(591) 00:20:02.993 fused_ordering(592) 00:20:02.993 fused_ordering(593) 00:20:02.993 fused_ordering(594) 00:20:02.993 fused_ordering(595) 00:20:02.993 fused_ordering(596) 00:20:02.993 fused_ordering(597) 00:20:02.993 fused_ordering(598) 00:20:02.993 fused_ordering(599) 00:20:02.993 fused_ordering(600) 00:20:02.993 fused_ordering(601) 00:20:02.993 fused_ordering(602) 00:20:02.993 fused_ordering(603) 00:20:02.993 fused_ordering(604) 00:20:02.993 fused_ordering(605) 00:20:02.993 fused_ordering(606) 00:20:02.993 fused_ordering(607) 
00:20:02.993 fused_ordering(608) 00:20:02.993 fused_ordering(609) 00:20:02.993 fused_ordering(610) 00:20:02.993 fused_ordering(611) 00:20:02.993 fused_ordering(612) 00:20:02.993 fused_ordering(613) 00:20:02.993 fused_ordering(614) 00:20:02.993 fused_ordering(615) 00:20:03.925 fused_ordering(616) 00:20:03.925 fused_ordering(617) 00:20:03.925 fused_ordering(618) 00:20:03.925 fused_ordering(619) 00:20:03.925 fused_ordering(620) 00:20:03.925 fused_ordering(621) 00:20:03.925 fused_ordering(622) 00:20:03.925 fused_ordering(623) 00:20:03.925 fused_ordering(624) 00:20:03.925 fused_ordering(625) 00:20:03.925 fused_ordering(626) 00:20:03.925 fused_ordering(627) 00:20:03.925 fused_ordering(628) 00:20:03.925 fused_ordering(629) 00:20:03.925 fused_ordering(630) 00:20:03.925 fused_ordering(631) 00:20:03.925 fused_ordering(632) 00:20:03.925 fused_ordering(633) 00:20:03.925 fused_ordering(634) 00:20:03.925 fused_ordering(635) 00:20:03.925 fused_ordering(636) 00:20:03.925 fused_ordering(637) 00:20:03.925 fused_ordering(638) 00:20:03.925 fused_ordering(639) 00:20:03.925 fused_ordering(640) 00:20:03.925 fused_ordering(641) 00:20:03.925 fused_ordering(642) 00:20:03.925 fused_ordering(643) 00:20:03.925 fused_ordering(644) 00:20:03.925 fused_ordering(645) 00:20:03.925 fused_ordering(646) 00:20:03.925 fused_ordering(647) 00:20:03.925 fused_ordering(648) 00:20:03.925 fused_ordering(649) 00:20:03.925 fused_ordering(650) 00:20:03.925 fused_ordering(651) 00:20:03.925 fused_ordering(652) 00:20:03.925 fused_ordering(653) 00:20:03.925 fused_ordering(654) 00:20:03.925 fused_ordering(655) 00:20:03.925 fused_ordering(656) 00:20:03.925 fused_ordering(657) 00:20:03.925 fused_ordering(658) 00:20:03.925 fused_ordering(659) 00:20:03.925 fused_ordering(660) 00:20:03.925 fused_ordering(661) 00:20:03.925 fused_ordering(662) 00:20:03.925 fused_ordering(663) 00:20:03.925 fused_ordering(664) 00:20:03.925 fused_ordering(665) 00:20:03.925 fused_ordering(666) 00:20:03.925 fused_ordering(667) 00:20:03.925 fused_ordering(668) 00:20:03.925 fused_ordering(669) 00:20:03.925 fused_ordering(670) 00:20:03.925 fused_ordering(671) 00:20:03.925 fused_ordering(672) 00:20:03.925 fused_ordering(673) 00:20:03.925 fused_ordering(674) 00:20:03.925 fused_ordering(675) 00:20:03.925 fused_ordering(676) 00:20:03.925 fused_ordering(677) 00:20:03.925 fused_ordering(678) 00:20:03.925 fused_ordering(679) 00:20:03.925 fused_ordering(680) 00:20:03.925 fused_ordering(681) 00:20:03.925 fused_ordering(682) 00:20:03.925 fused_ordering(683) 00:20:03.925 fused_ordering(684) 00:20:03.925 fused_ordering(685) 00:20:03.925 fused_ordering(686) 00:20:03.925 fused_ordering(687) 00:20:03.925 fused_ordering(688) 00:20:03.925 fused_ordering(689) 00:20:03.925 fused_ordering(690) 00:20:03.925 fused_ordering(691) 00:20:03.925 fused_ordering(692) 00:20:03.925 fused_ordering(693) 00:20:03.925 fused_ordering(694) 00:20:03.925 fused_ordering(695) 00:20:03.925 fused_ordering(696) 00:20:03.925 fused_ordering(697) 00:20:03.925 fused_ordering(698) 00:20:03.925 fused_ordering(699) 00:20:03.925 fused_ordering(700) 00:20:03.925 fused_ordering(701) 00:20:03.925 fused_ordering(702) 00:20:03.925 fused_ordering(703) 00:20:03.925 fused_ordering(704) 00:20:03.925 fused_ordering(705) 00:20:03.925 fused_ordering(706) 00:20:03.925 fused_ordering(707) 00:20:03.925 fused_ordering(708) 00:20:03.925 fused_ordering(709) 00:20:03.925 fused_ordering(710) 00:20:03.925 fused_ordering(711) 00:20:03.925 fused_ordering(712) 00:20:03.925 fused_ordering(713) 00:20:03.925 fused_ordering(714) 00:20:03.925 
fused_ordering(715) 00:20:03.925 fused_ordering(716) 00:20:03.925 fused_ordering(717) 00:20:03.925 fused_ordering(718) 00:20:03.925 fused_ordering(719) 00:20:03.925 fused_ordering(720) 00:20:03.925 fused_ordering(721) 00:20:03.925 fused_ordering(722) 00:20:03.925 fused_ordering(723) 00:20:03.925 fused_ordering(724) 00:20:03.925 fused_ordering(725) 00:20:03.925 fused_ordering(726) 00:20:03.925 fused_ordering(727) 00:20:03.925 fused_ordering(728) 00:20:03.925 fused_ordering(729) 00:20:03.925 fused_ordering(730) 00:20:03.925 fused_ordering(731) 00:20:03.925 fused_ordering(732) 00:20:03.925 fused_ordering(733) 00:20:03.925 fused_ordering(734) 00:20:03.925 fused_ordering(735) 00:20:03.925 fused_ordering(736) 00:20:03.925 fused_ordering(737) 00:20:03.925 fused_ordering(738) 00:20:03.925 fused_ordering(739) 00:20:03.925 fused_ordering(740) 00:20:03.925 fused_ordering(741) 00:20:03.925 fused_ordering(742) 00:20:03.925 fused_ordering(743) 00:20:03.925 fused_ordering(744) 00:20:03.925 fused_ordering(745) 00:20:03.925 fused_ordering(746) 00:20:03.925 fused_ordering(747) 00:20:03.925 fused_ordering(748) 00:20:03.925 fused_ordering(749) 00:20:03.925 fused_ordering(750) 00:20:03.925 fused_ordering(751) 00:20:03.925 fused_ordering(752) 00:20:03.925 fused_ordering(753) 00:20:03.925 fused_ordering(754) 00:20:03.925 fused_ordering(755) 00:20:03.925 fused_ordering(756) 00:20:03.925 fused_ordering(757) 00:20:03.925 fused_ordering(758) 00:20:03.925 fused_ordering(759) 00:20:03.925 fused_ordering(760) 00:20:03.925 fused_ordering(761) 00:20:03.925 fused_ordering(762) 00:20:03.925 fused_ordering(763) 00:20:03.925 fused_ordering(764) 00:20:03.925 fused_ordering(765) 00:20:03.925 fused_ordering(766) 00:20:03.925 fused_ordering(767) 00:20:03.925 fused_ordering(768) 00:20:03.925 fused_ordering(769) 00:20:03.925 fused_ordering(770) 00:20:03.925 fused_ordering(771) 00:20:03.925 fused_ordering(772) 00:20:03.925 fused_ordering(773) 00:20:03.925 fused_ordering(774) 00:20:03.925 fused_ordering(775) 00:20:03.925 fused_ordering(776) 00:20:03.925 fused_ordering(777) 00:20:03.925 fused_ordering(778) 00:20:03.925 fused_ordering(779) 00:20:03.925 fused_ordering(780) 00:20:03.925 fused_ordering(781) 00:20:03.925 fused_ordering(782) 00:20:03.925 fused_ordering(783) 00:20:03.925 fused_ordering(784) 00:20:03.925 fused_ordering(785) 00:20:03.925 fused_ordering(786) 00:20:03.925 fused_ordering(787) 00:20:03.925 fused_ordering(788) 00:20:03.925 fused_ordering(789) 00:20:03.925 fused_ordering(790) 00:20:03.925 fused_ordering(791) 00:20:03.925 fused_ordering(792) 00:20:03.925 fused_ordering(793) 00:20:03.925 fused_ordering(794) 00:20:03.925 fused_ordering(795) 00:20:03.926 fused_ordering(796) 00:20:03.926 fused_ordering(797) 00:20:03.926 fused_ordering(798) 00:20:03.926 fused_ordering(799) 00:20:03.926 fused_ordering(800) 00:20:03.926 fused_ordering(801) 00:20:03.926 fused_ordering(802) 00:20:03.926 fused_ordering(803) 00:20:03.926 fused_ordering(804) 00:20:03.926 fused_ordering(805) 00:20:03.926 fused_ordering(806) 00:20:03.926 fused_ordering(807) 00:20:03.926 fused_ordering(808) 00:20:03.926 fused_ordering(809) 00:20:03.926 fused_ordering(810) 00:20:03.926 fused_ordering(811) 00:20:03.926 fused_ordering(812) 00:20:03.926 fused_ordering(813) 00:20:03.926 fused_ordering(814) 00:20:03.926 fused_ordering(815) 00:20:03.926 fused_ordering(816) 00:20:03.926 fused_ordering(817) 00:20:03.926 fused_ordering(818) 00:20:03.926 fused_ordering(819) 00:20:03.926 fused_ordering(820) 00:20:04.858 fused_ordering(821) 00:20:04.858 fused_ordering(822) 
00:20:04.858 fused_ordering(823) 00:20:04.858 fused_ordering(824) 00:20:04.858 fused_ordering(825) 00:20:04.858 fused_ordering(826) 00:20:04.858 fused_ordering(827) 00:20:04.858 fused_ordering(828) 00:20:04.858 fused_ordering(829) 00:20:04.858 fused_ordering(830) 00:20:04.858 fused_ordering(831) 00:20:04.858 fused_ordering(832) 00:20:04.858 fused_ordering(833) 00:20:04.858 fused_ordering(834) 00:20:04.858 fused_ordering(835) 00:20:04.858 fused_ordering(836) 00:20:04.858 fused_ordering(837) 00:20:04.858 fused_ordering(838) 00:20:04.858 fused_ordering(839) 00:20:04.858 fused_ordering(840) 00:20:04.858 fused_ordering(841) 00:20:04.858 fused_ordering(842) 00:20:04.858 fused_ordering(843) 00:20:04.858 fused_ordering(844) 00:20:04.858 fused_ordering(845) 00:20:04.858 fused_ordering(846) 00:20:04.858 fused_ordering(847) 00:20:04.858 fused_ordering(848) 00:20:04.858 fused_ordering(849) 00:20:04.858 fused_ordering(850) 00:20:04.858 fused_ordering(851) 00:20:04.858 fused_ordering(852) 00:20:04.858 fused_ordering(853) 00:20:04.858 fused_ordering(854) 00:20:04.858 fused_ordering(855) 00:20:04.858 fused_ordering(856) 00:20:04.858 fused_ordering(857) 00:20:04.858 fused_ordering(858) 00:20:04.858 fused_ordering(859) 00:20:04.858 fused_ordering(860) 00:20:04.858 fused_ordering(861) 00:20:04.858 fused_ordering(862) 00:20:04.858 fused_ordering(863) 00:20:04.858 fused_ordering(864) 00:20:04.858 fused_ordering(865) 00:20:04.858 fused_ordering(866) 00:20:04.858 fused_ordering(867) 00:20:04.858 fused_ordering(868) 00:20:04.858 fused_ordering(869) 00:20:04.858 fused_ordering(870) 00:20:04.858 fused_ordering(871) 00:20:04.858 fused_ordering(872) 00:20:04.858 fused_ordering(873) 00:20:04.858 fused_ordering(874) 00:20:04.858 fused_ordering(875) 00:20:04.858 fused_ordering(876) 00:20:04.858 fused_ordering(877) 00:20:04.858 fused_ordering(878) 00:20:04.858 fused_ordering(879) 00:20:04.858 fused_ordering(880) 00:20:04.858 fused_ordering(881) 00:20:04.858 fused_ordering(882) 00:20:04.858 fused_ordering(883) 00:20:04.858 fused_ordering(884) 00:20:04.858 fused_ordering(885) 00:20:04.858 fused_ordering(886) 00:20:04.858 fused_ordering(887) 00:20:04.858 fused_ordering(888) 00:20:04.858 fused_ordering(889) 00:20:04.858 fused_ordering(890) 00:20:04.858 fused_ordering(891) 00:20:04.858 fused_ordering(892) 00:20:04.858 fused_ordering(893) 00:20:04.858 fused_ordering(894) 00:20:04.858 fused_ordering(895) 00:20:04.858 fused_ordering(896) 00:20:04.858 fused_ordering(897) 00:20:04.858 fused_ordering(898) 00:20:04.858 fused_ordering(899) 00:20:04.858 fused_ordering(900) 00:20:04.858 fused_ordering(901) 00:20:04.858 fused_ordering(902) 00:20:04.858 fused_ordering(903) 00:20:04.858 fused_ordering(904) 00:20:04.858 fused_ordering(905) 00:20:04.858 fused_ordering(906) 00:20:04.858 fused_ordering(907) 00:20:04.858 fused_ordering(908) 00:20:04.858 fused_ordering(909) 00:20:04.858 fused_ordering(910) 00:20:04.858 fused_ordering(911) 00:20:04.858 fused_ordering(912) 00:20:04.858 fused_ordering(913) 00:20:04.858 fused_ordering(914) 00:20:04.858 fused_ordering(915) 00:20:04.858 fused_ordering(916) 00:20:04.858 fused_ordering(917) 00:20:04.858 fused_ordering(918) 00:20:04.858 fused_ordering(919) 00:20:04.858 fused_ordering(920) 00:20:04.858 fused_ordering(921) 00:20:04.858 fused_ordering(922) 00:20:04.858 fused_ordering(923) 00:20:04.858 fused_ordering(924) 00:20:04.858 fused_ordering(925) 00:20:04.858 fused_ordering(926) 00:20:04.858 fused_ordering(927) 00:20:04.858 fused_ordering(928) 00:20:04.858 fused_ordering(929) 00:20:04.858 
fused_ordering(930) 00:20:04.858 fused_ordering(931) 00:20:04.858 fused_ordering(932) 00:20:04.858 fused_ordering(933) 00:20:04.858 fused_ordering(934) 00:20:04.858 fused_ordering(935) 00:20:04.858 fused_ordering(936) 00:20:04.858 fused_ordering(937) 00:20:04.858 fused_ordering(938) 00:20:04.858 fused_ordering(939) 00:20:04.858 fused_ordering(940) 00:20:04.858 fused_ordering(941) 00:20:04.858 fused_ordering(942) 00:20:04.858 fused_ordering(943) 00:20:04.858 fused_ordering(944) 00:20:04.858 fused_ordering(945) 00:20:04.858 fused_ordering(946) 00:20:04.858 fused_ordering(947) 00:20:04.858 fused_ordering(948) 00:20:04.858 fused_ordering(949) 00:20:04.858 fused_ordering(950) 00:20:04.858 fused_ordering(951) 00:20:04.858 fused_ordering(952) 00:20:04.858 fused_ordering(953) 00:20:04.858 fused_ordering(954) 00:20:04.858 fused_ordering(955) 00:20:04.858 fused_ordering(956) 00:20:04.858 fused_ordering(957) 00:20:04.858 fused_ordering(958) 00:20:04.858 fused_ordering(959) 00:20:04.858 fused_ordering(960) 00:20:04.858 fused_ordering(961) 00:20:04.858 fused_ordering(962) 00:20:04.858 fused_ordering(963) 00:20:04.858 fused_ordering(964) 00:20:04.858 fused_ordering(965) 00:20:04.858 fused_ordering(966) 00:20:04.858 fused_ordering(967) 00:20:04.858 fused_ordering(968) 00:20:04.858 fused_ordering(969) 00:20:04.858 fused_ordering(970) 00:20:04.858 fused_ordering(971) 00:20:04.858 fused_ordering(972) 00:20:04.858 fused_ordering(973) 00:20:04.858 fused_ordering(974) 00:20:04.858 fused_ordering(975) 00:20:04.858 fused_ordering(976) 00:20:04.858 fused_ordering(977) 00:20:04.858 fused_ordering(978) 00:20:04.858 fused_ordering(979) 00:20:04.858 fused_ordering(980) 00:20:04.858 fused_ordering(981) 00:20:04.858 fused_ordering(982) 00:20:04.858 fused_ordering(983) 00:20:04.858 fused_ordering(984) 00:20:04.858 fused_ordering(985) 00:20:04.858 fused_ordering(986) 00:20:04.858 fused_ordering(987) 00:20:04.858 fused_ordering(988) 00:20:04.858 fused_ordering(989) 00:20:04.858 fused_ordering(990) 00:20:04.858 fused_ordering(991) 00:20:04.858 fused_ordering(992) 00:20:04.858 fused_ordering(993) 00:20:04.858 fused_ordering(994) 00:20:04.858 fused_ordering(995) 00:20:04.858 fused_ordering(996) 00:20:04.858 fused_ordering(997) 00:20:04.858 fused_ordering(998) 00:20:04.858 fused_ordering(999) 00:20:04.858 fused_ordering(1000) 00:20:04.859 fused_ordering(1001) 00:20:04.859 fused_ordering(1002) 00:20:04.859 fused_ordering(1003) 00:20:04.859 fused_ordering(1004) 00:20:04.859 fused_ordering(1005) 00:20:04.859 fused_ordering(1006) 00:20:04.859 fused_ordering(1007) 00:20:04.859 fused_ordering(1008) 00:20:04.859 fused_ordering(1009) 00:20:04.859 fused_ordering(1010) 00:20:04.859 fused_ordering(1011) 00:20:04.859 fused_ordering(1012) 00:20:04.859 fused_ordering(1013) 00:20:04.859 fused_ordering(1014) 00:20:04.859 fused_ordering(1015) 00:20:04.859 fused_ordering(1016) 00:20:04.859 fused_ordering(1017) 00:20:04.859 fused_ordering(1018) 00:20:04.859 fused_ordering(1019) 00:20:04.859 fused_ordering(1020) 00:20:04.859 fused_ordering(1021) 00:20:04.859 fused_ordering(1022) 00:20:04.859 fused_ordering(1023) 00:20:04.859 11:11:12 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:04.859 11:11:12 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:04.859 11:11:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:04.859 11:11:12 -- nvmf/common.sh@117 -- # sync 00:20:04.859 11:11:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.859 11:11:12 -- nvmf/common.sh@120 -- # set +e 00:20:04.859 11:11:12 -- nvmf/common.sh@121 -- 
# for i in {1..20} 00:20:04.859 11:11:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.859 rmmod nvme_tcp 00:20:04.859 rmmod nvme_fabrics 00:20:04.859 rmmod nvme_keyring 00:20:04.859 11:11:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.859 11:11:12 -- nvmf/common.sh@124 -- # set -e 00:20:04.859 11:11:12 -- nvmf/common.sh@125 -- # return 0 00:20:04.859 11:11:12 -- nvmf/common.sh@478 -- # '[' -n 71622 ']' 00:20:04.859 11:11:12 -- nvmf/common.sh@479 -- # killprocess 71622 00:20:04.859 11:11:12 -- common/autotest_common.sh@936 -- # '[' -z 71622 ']' 00:20:04.859 11:11:12 -- common/autotest_common.sh@940 -- # kill -0 71622 00:20:04.859 11:11:12 -- common/autotest_common.sh@941 -- # uname 00:20:04.859 11:11:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.859 11:11:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71622 00:20:04.859 11:11:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:04.859 killing process with pid 71622 00:20:04.859 11:11:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:04.859 11:11:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71622' 00:20:04.859 11:11:12 -- common/autotest_common.sh@955 -- # kill 71622 00:20:04.859 11:11:12 -- common/autotest_common.sh@960 -- # wait 71622 00:20:06.241 11:11:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:06.241 11:11:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:06.241 11:11:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:06.241 11:11:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.241 11:11:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.241 11:11:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.241 11:11:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.241 11:11:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.241 11:11:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:06.241 00:20:06.241 real 0m6.475s 00:20:06.241 user 0m7.817s 00:20:06.241 sys 0m1.840s 00:20:06.241 11:11:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:06.241 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:20:06.241 ************************************ 00:20:06.241 END TEST nvmf_fused_ordering 00:20:06.241 ************************************ 00:20:06.241 11:11:14 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:06.241 11:11:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:06.241 11:11:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:06.241 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:20:06.499 ************************************ 00:20:06.499 START TEST nvmf_delete_subsystem 00:20:06.499 ************************************ 00:20:06.499 11:11:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:06.499 * Looking for test storage... 
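
Stripped of the xtrace noise, the fused_ordering teardown traced above does two things: nvmftestfini unloads the kernel NVMe/TCP modules, and killprocess stops the nvmf_tgt this run started (pid 71622, running as reactor_1). A minimal sketch of those steps, using only the values shown in the log (the sudo special case of killprocess is left out, and the final wait works only because nvmf_tgt is a child of the test shell):

    sync
    set +e
    for i in {1..20}; do                       # retry until the modules actually unload
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e

    pid=71622                                  # nvmf_tgt pid from this run
    if kill -0 "$pid" 2>/dev/null; then
        ps --no-headers -o comm= "$pid"        # prints reactor_1 here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                    # reap it; non-zero status after SIGTERM is expected
    fi
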
00:20:06.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:06.499 11:11:14 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.499 11:11:14 -- nvmf/common.sh@7 -- # uname -s 00:20:06.499 11:11:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.499 11:11:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.499 11:11:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.499 11:11:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.499 11:11:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.499 11:11:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.499 11:11:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.499 11:11:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.499 11:11:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.499 11:11:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.499 11:11:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:20:06.499 11:11:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:20:06.499 11:11:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.499 11:11:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.499 11:11:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.499 11:11:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.499 11:11:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.499 11:11:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.499 11:11:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.499 11:11:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.499 11:11:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.499 11:11:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.499 11:11:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.499 11:11:14 -- paths/export.sh@5 -- # export PATH 00:20:06.499 11:11:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.499 11:11:14 -- nvmf/common.sh@47 -- # : 0 00:20:06.499 11:11:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.499 11:11:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.499 11:11:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.499 11:11:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.499 11:11:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.499 11:11:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.499 11:11:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.499 11:11:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.499 11:11:14 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:20:06.499 11:11:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:06.499 11:11:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.499 11:11:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:06.499 11:11:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:06.499 11:11:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:06.499 11:11:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.499 11:11:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.499 11:11:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.499 11:11:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:06.499 11:11:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:06.499 11:11:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:06.499 11:11:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:06.499 11:11:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:06.499 11:11:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:06.499 11:11:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.499 11:11:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.499 11:11:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:06.499 11:11:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:06.499 11:11:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.500 11:11:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.500 11:11:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.500 11:11:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:06.500 11:11:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.500 11:11:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:06.500 11:11:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.500 11:11:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.500 11:11:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:06.500 11:11:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:06.500 Cannot find device "nvmf_tgt_br" 00:20:06.500 11:11:14 -- nvmf/common.sh@155 -- # true 00:20:06.500 11:11:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.500 Cannot find device "nvmf_tgt_br2" 00:20:06.500 11:11:14 -- nvmf/common.sh@156 -- # true 00:20:06.500 11:11:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:06.500 11:11:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:06.500 Cannot find device "nvmf_tgt_br" 00:20:06.500 11:11:14 -- nvmf/common.sh@158 -- # true 00:20:06.500 11:11:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:06.500 Cannot find device "nvmf_tgt_br2" 00:20:06.500 11:11:14 -- nvmf/common.sh@159 -- # true 00:20:06.500 11:11:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:06.500 11:11:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:06.500 11:11:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.500 11:11:14 -- nvmf/common.sh@162 -- # true 00:20:06.500 11:11:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.500 11:11:14 -- nvmf/common.sh@163 -- # true 00:20:06.500 11:11:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.500 11:11:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.500 11:11:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.500 11:11:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.500 11:11:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:06.757 11:11:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.757 11:11:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.757 11:11:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:06.758 11:11:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:06.758 11:11:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:06.758 11:11:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:06.758 11:11:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:06.758 11:11:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:06.758 11:11:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.758 11:11:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:06.758 11:11:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.758 11:11:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:06.758 11:11:14 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:06.758 11:11:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:06.758 11:11:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.758 11:11:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.758 11:11:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.758 11:11:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.758 11:11:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:06.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:20:06.758 00:20:06.758 --- 10.0.0.2 ping statistics --- 00:20:06.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.758 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:20:06.758 11:11:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:06.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:20:06.758 00:20:06.758 --- 10.0.0.3 ping statistics --- 00:20:06.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.758 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:06.758 11:11:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:20:06.758 00:20:06.758 --- 10.0.0.1 ping statistics --- 00:20:06.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.758 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:20:06.758 11:11:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.758 11:11:14 -- nvmf/common.sh@422 -- # return 0 00:20:06.758 11:11:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:06.758 11:11:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.758 11:11:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:06.758 11:11:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:06.758 11:11:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.758 11:11:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:06.758 11:11:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:06.758 11:11:14 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:20:06.758 11:11:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:06.758 11:11:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:06.758 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:20:06.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.758 11:11:14 -- nvmf/common.sh@470 -- # nvmfpid=71923 00:20:06.758 11:11:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:06.758 11:11:14 -- nvmf/common.sh@471 -- # waitforlisten 71923 00:20:06.758 11:11:14 -- common/autotest_common.sh@817 -- # '[' -z 71923 ']' 00:20:06.758 11:11:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.758 11:11:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:06.758 11:11:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
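
The nvmf_veth_init sequence traced above builds a small veth-and-bridge topology: the target interfaces live in the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), the initiator side stays on the host (10.0.0.1), and the host-side peers are bridged so TCP port 4420 is reachable in both directions. Condensed into plain commands with the same device names as in the log (the second target pair, nvmf_tgt_if2/nvmf_tgt_br2, is set up the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two host-side peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # host -> target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target netns -> host
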
00:20:06.758 11:11:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:06.758 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:20:07.016 [2024-04-18 11:11:15.026587] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:07.016 [2024-04-18 11:11:15.026806] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.016 [2024-04-18 11:11:15.207083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:07.581 [2024-04-18 11:11:15.554319] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.581 [2024-04-18 11:11:15.554656] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.581 [2024-04-18 11:11:15.554696] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.581 [2024-04-18 11:11:15.554732] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.581 [2024-04-18 11:11:15.554753] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.581 [2024-04-18 11:11:15.554975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.581 [2024-04-18 11:11:15.555208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.839 11:11:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:07.839 11:11:15 -- common/autotest_common.sh@850 -- # return 0 00:20:07.839 11:11:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:07.839 11:11:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:07.839 11:11:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.839 11:11:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.839 11:11:16 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.839 11:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.839 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.839 [2024-04-18 11:11:16.023147] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.839 11:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.839 11:11:16 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:07.839 11:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.839 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.839 11:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.839 11:11:16 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.839 11:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.839 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.839 [2024-04-18 11:11:16.045286] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.839 11:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.839 11:11:16 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:07.839 11:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.839 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.839 
NULL1 00:20:07.839 11:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.839 11:11:16 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:07.839 11:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.839 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:08.096 Delay0 00:20:08.096 11:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.096 11:11:16 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:08.096 11:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.096 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:08.096 11:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.096 11:11:16 -- target/delete_subsystem.sh@28 -- # perf_pid=71974 00:20:08.096 11:11:16 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:08.096 11:11:16 -- target/delete_subsystem.sh@30 -- # sleep 2 00:20:08.096 [2024-04-18 11:11:16.288146] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:10.001 11:11:18 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:10.001 11:11:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.001 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read 
completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 [2024-04-18 11:11:18.346444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010240 is same with the state(5) to be set 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, 
sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read 
completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Write completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 Read completed with error (sct=0, sc=8) 00:20:10.260 starting I/O failed: -6 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Write completed with error (sct=0, sc=8) 00:20:10.261 Write completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Write completed with error (sct=0, sc=8) 00:20:10.261 starting I/O failed: -6 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Write completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Write completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Write completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Write completed with error (sct=0, sc=8) 00:20:10.261 Write completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 Read completed with error (sct=0, sc=8) 00:20:10.261 starting I/O failed: -6 00:20:10.261 Write completed with error (sct=0, sc=8) 00:20:10.261 starting I/O failed: -6 00:20:10.261 starting I/O failed: -6 00:20:11.197 [2024-04-18 11:11:19.308381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002240 is same with the state(5) to be set 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 [2024-04-18 11:11:19.343333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010440 is same with the state(5) to be set 00:20:11.197 
Write completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 [2024-04-18 11:11:19.345160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002a40 is same with the state(5) to be set 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with 
error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 [2024-04-18 11:11:19.346052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002640 is same with the state(5) to be set 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Write completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 Read completed with error (sct=0, sc=8) 00:20:11.197 [2024-04-18 11:11:19.346871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002440 is same with the state(5) to be set 00:20:11.197 11:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.197 11:11:19 -- target/delete_subsystem.sh@34 -- # delay=0 00:20:11.197 11:11:19 -- target/delete_subsystem.sh@35 -- # kill -0 71974 00:20:11.197 11:11:19 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:20:11.197 [2024-04-18 11:11:19.352048] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000002240 (9): Bad file descriptor 00:20:11.197 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:11.197 Initializing NVMe Controllers 00:20:11.197 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.197 Controller IO queue size 128, less than required. 00:20:11.197 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:11.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:11.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:11.197 Initialization complete. Launching workers. 00:20:11.197 ======================================================== 00:20:11.197 Latency(us) 00:20:11.197 Device Information : IOPS MiB/s Average min max 00:20:11.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 192.19 0.09 953321.25 2847.20 1018975.52 00:20:11.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.64 0.08 881920.88 2482.30 1020550.87 00:20:11.197 ======================================================== 00:20:11.197 Total : 346.84 0.17 921486.04 2482.30 1020550.87 00:20:11.197 00:20:11.763 11:11:19 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:20:11.763 11:11:19 -- target/delete_subsystem.sh@35 -- # kill -0 71974 00:20:11.763 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71974) - No such process 00:20:11.763 11:11:19 -- target/delete_subsystem.sh@45 -- # NOT wait 71974 00:20:11.763 11:11:19 -- common/autotest_common.sh@638 -- # local es=0 00:20:11.763 11:11:19 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 71974 00:20:11.763 11:11:19 -- common/autotest_common.sh@626 -- # local arg=wait 00:20:11.763 11:11:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:11.763 11:11:19 -- common/autotest_common.sh@630 -- # type -t wait 00:20:11.763 11:11:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:11.763 11:11:19 -- common/autotest_common.sh@641 -- # wait 71974 00:20:11.763 11:11:19 -- common/autotest_common.sh@641 -- # es=1 00:20:11.763 11:11:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:11.763 11:11:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:11.763 11:11:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:11.763 11:11:19 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:11.763 11:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.763 11:11:19 -- common/autotest_common.sh@10 -- # set +x 00:20:11.763 11:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.763 11:11:19 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.763 11:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.763 11:11:19 -- common/autotest_common.sh@10 -- # set +x 00:20:11.763 [2024-04-18 11:11:19.874873] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.763 11:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.764 11:11:19 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:11.764 11:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.764 11:11:19 -- common/autotest_common.sh@10 -- # set +x 00:20:11.764 11:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.764 11:11:19 -- target/delete_subsystem.sh@54 -- # perf_pid=72015 00:20:11.764 11:11:19 -- 
target/delete_subsystem.sh@56 -- # delay=0 00:20:11.764 11:11:19 -- target/delete_subsystem.sh@57 -- # kill -0 72015 00:20:11.764 11:11:19 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:11.764 11:11:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:12.021 [2024-04-18 11:11:20.096343] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:12.278 11:11:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:12.278 11:11:20 -- target/delete_subsystem.sh@57 -- # kill -0 72015 00:20:12.278 11:11:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:12.852 11:11:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:12.852 11:11:20 -- target/delete_subsystem.sh@57 -- # kill -0 72015 00:20:12.852 11:11:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:13.418 11:11:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:13.418 11:11:21 -- target/delete_subsystem.sh@57 -- # kill -0 72015 00:20:13.418 11:11:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:13.986 11:11:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:13.986 11:11:21 -- target/delete_subsystem.sh@57 -- # kill -0 72015 00:20:13.986 11:11:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:14.244 11:11:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:14.244 11:11:22 -- target/delete_subsystem.sh@57 -- # kill -0 72015 00:20:14.244 11:11:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:14.812 11:11:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:14.812 11:11:22 -- target/delete_subsystem.sh@57 -- # kill -0 72015 00:20:14.812 11:11:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:15.078 Initializing NVMe Controllers 00:20:15.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.078 Controller IO queue size 128, less than required. 00:20:15.078 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:15.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:15.078 Initialization complete. Launching workers. 
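
What the delete_subsystem test keeps exercising is visible in the trace: bring up a subsystem whose namespace sits behind a deliberately slow delay bdev, push I/O at it with spdk_nvme_perf, delete the subsystem while that I/O is still queued, then poll with kill -0 until perf notices and exits with I/O errors. A rough sketch of one iteration, reusing the commands and paths shown in the log (rpc.py is assumed to talk to the default /var/tmp/spdk.sock socket, which is what the rpc_cmd wrapper in the trace does):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                    # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # slow namespace keeps I/O queued

    $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2                                                 # let perf connect and queue I/O
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete while I/O is in flight

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do               # poll until perf gives up
        (( delay++ > 20 )) && { echo "perf did not exit"; break; }
        sleep 0.5
    done
    wait "$perf_pid" || true                                # I/O errors are the expected outcome
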
00:20:15.078 ======================================================== 00:20:15.078 Latency(us) 00:20:15.078 Device Information : IOPS MiB/s Average min max 00:20:15.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1007988.07 1000218.53 1042520.02 00:20:15.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004111.26 1000211.46 1013953.62 00:20:15.078 ======================================================== 00:20:15.078 Total : 256.00 0.12 1006049.67 1000211.46 1042520.02 00:20:15.078 00:20:15.340 11:11:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:15.340 11:11:23 -- target/delete_subsystem.sh@57 -- # kill -0 72015 00:20:15.340 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (72015) - No such process 00:20:15.340 11:11:23 -- target/delete_subsystem.sh@67 -- # wait 72015 00:20:15.340 11:11:23 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:15.340 11:11:23 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:20:15.340 11:11:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:15.340 11:11:23 -- nvmf/common.sh@117 -- # sync 00:20:15.340 11:11:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.340 11:11:23 -- nvmf/common.sh@120 -- # set +e 00:20:15.340 11:11:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.340 11:11:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.340 rmmod nvme_tcp 00:20:15.340 rmmod nvme_fabrics 00:20:15.340 rmmod nvme_keyring 00:20:15.340 11:11:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.340 11:11:23 -- nvmf/common.sh@124 -- # set -e 00:20:15.340 11:11:23 -- nvmf/common.sh@125 -- # return 0 00:20:15.340 11:11:23 -- nvmf/common.sh@478 -- # '[' -n 71923 ']' 00:20:15.340 11:11:23 -- nvmf/common.sh@479 -- # killprocess 71923 00:20:15.340 11:11:23 -- common/autotest_common.sh@936 -- # '[' -z 71923 ']' 00:20:15.340 11:11:23 -- common/autotest_common.sh@940 -- # kill -0 71923 00:20:15.340 11:11:23 -- common/autotest_common.sh@941 -- # uname 00:20:15.340 11:11:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.340 11:11:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71923 00:20:15.340 killing process with pid 71923 00:20:15.340 11:11:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:15.340 11:11:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:15.340 11:11:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71923' 00:20:15.340 11:11:23 -- common/autotest_common.sh@955 -- # kill 71923 00:20:15.340 11:11:23 -- common/autotest_common.sh@960 -- # wait 71923 00:20:16.713 11:11:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:16.713 11:11:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:16.713 11:11:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:16.713 11:11:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.713 11:11:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:16.713 11:11:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.713 11:11:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.713 11:11:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.713 11:11:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:16.713 00:20:16.713 real 0m10.323s 00:20:16.713 user 0m30.099s 00:20:16.713 sys 0m1.695s 00:20:16.713 ************************************ 
00:20:16.713 END TEST nvmf_delete_subsystem 00:20:16.713 ************************************ 00:20:16.713 11:11:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:16.713 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:20:16.713 11:11:24 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:20:16.713 11:11:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:16.713 11:11:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.713 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:20:16.971 ************************************ 00:20:16.971 START TEST nvmf_ns_masking 00:20:16.971 ************************************ 00:20:16.971 11:11:24 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:20:16.971 * Looking for test storage... 00:20:16.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:16.971 11:11:25 -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.971 11:11:25 -- nvmf/common.sh@7 -- # uname -s 00:20:16.971 11:11:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.971 11:11:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.971 11:11:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.971 11:11:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.971 11:11:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.971 11:11:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.971 11:11:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.971 11:11:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.971 11:11:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.971 11:11:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.971 11:11:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:20:16.971 11:11:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:20:16.971 11:11:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.971 11:11:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.971 11:11:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:16.971 11:11:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.971 11:11:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:16.971 11:11:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.971 11:11:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.971 11:11:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.971 11:11:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.971 11:11:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.971 11:11:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.971 11:11:25 -- paths/export.sh@5 -- # export PATH 00:20:16.971 11:11:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.971 11:11:25 -- nvmf/common.sh@47 -- # : 0 00:20:16.971 11:11:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.971 11:11:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.971 11:11:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.971 11:11:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.971 11:11:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.971 11:11:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.971 11:11:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.971 11:11:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.971 11:11:25 -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:16.971 11:11:25 -- target/ns_masking.sh@11 -- # loops=5 00:20:16.971 11:11:25 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:16.971 11:11:25 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:20:16.971 11:11:25 -- target/ns_masking.sh@15 -- # uuidgen 00:20:16.971 11:11:25 -- target/ns_masking.sh@15 -- # HOSTID=cd2668a4-1046-42d7-b407-4671765f7ee2 00:20:16.971 11:11:25 -- target/ns_masking.sh@44 -- # nvmftestinit 00:20:16.971 11:11:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:16.971 11:11:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.971 11:11:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:16.971 11:11:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:16.971 11:11:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:16.971 11:11:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.971 11:11:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.971 11:11:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
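Before it touches the target, ns_masking.sh pins down the identity it will test with: a fixed subsystem NQN and host NQN plus a per-run host ID from uuidgen, all of which are handed to nvme connect later in the run. A minimal sketch reconstructed from the trace lines above and the connect calls further down (paths, NQNs and addresses as used in this run):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2016-06.io.spdk:host1
  HOSTID=$(uuidgen)        # cd2668a4-1046-42d7-b407-4671765f7ee2 in this run

  connect() {
      # -q/-I identify the initiator so the target can apply per-host namespace masking;
      # 10.0.0.2:4420 is the listener created inside the target's network namespace.
      nvme connect -t tcp -n "$SUBSYSNQN" -q "$HOSTNQN" -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4
  }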
00:20:16.971 11:11:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:16.971 11:11:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:16.971 11:11:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:16.971 11:11:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:16.971 11:11:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:16.971 11:11:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:16.971 11:11:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.971 11:11:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.971 11:11:25 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:16.971 11:11:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:16.971 11:11:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:16.971 11:11:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:16.971 11:11:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:16.971 11:11:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.971 11:11:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:16.972 11:11:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:16.972 11:11:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:16.972 11:11:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:16.972 11:11:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:16.972 11:11:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:16.972 Cannot find device "nvmf_tgt_br" 00:20:16.972 11:11:25 -- nvmf/common.sh@155 -- # true 00:20:16.972 11:11:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:16.972 Cannot find device "nvmf_tgt_br2" 00:20:16.972 11:11:25 -- nvmf/common.sh@156 -- # true 00:20:16.972 11:11:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:16.972 11:11:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:16.972 Cannot find device "nvmf_tgt_br" 00:20:16.972 11:11:25 -- nvmf/common.sh@158 -- # true 00:20:16.972 11:11:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:16.972 Cannot find device "nvmf_tgt_br2" 00:20:16.972 11:11:25 -- nvmf/common.sh@159 -- # true 00:20:16.972 11:11:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:16.972 11:11:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:16.972 11:11:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:16.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.972 11:11:25 -- nvmf/common.sh@162 -- # true 00:20:16.972 11:11:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:16.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.972 11:11:25 -- nvmf/common.sh@163 -- # true 00:20:16.972 11:11:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:17.229 11:11:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:17.229 11:11:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:17.229 11:11:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:17.229 11:11:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:17.229 11:11:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
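At this point nvmf_veth_init has laid out the skeleton of the test network: a dedicated namespace for the target and three veth pairs whose target-side ends have just been moved into it; the lines that follow assign addresses, bring the links up, bridge the host-side ends and open TCP port 4420. Condensed from the ip/iptables commands traced here (a sketch, not the helper verbatim):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target listener, 10.0.0.2/24
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target address, 10.0.0.3/24
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # then: addresses, links up, an nvmf_br bridge over the *_br ends,
  # and an iptables rule accepting TCP dport 4420 on nvmf_init_if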
00:20:17.229 11:11:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:17.229 11:11:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:17.229 11:11:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:17.229 11:11:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:17.229 11:11:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:17.229 11:11:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:17.229 11:11:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:17.229 11:11:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:17.229 11:11:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:17.229 11:11:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:17.229 11:11:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:17.229 11:11:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:17.229 11:11:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:17.229 11:11:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:17.229 11:11:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:17.229 11:11:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:17.229 11:11:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:17.229 11:11:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:17.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:20:17.229 00:20:17.229 --- 10.0.0.2 ping statistics --- 00:20:17.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.229 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:17.230 11:11:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:17.230 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:17.230 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:20:17.230 00:20:17.230 --- 10.0.0.3 ping statistics --- 00:20:17.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.230 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:17.230 11:11:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:17.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:17.230 00:20:17.230 --- 10.0.0.1 ping statistics --- 00:20:17.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.230 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:17.230 11:11:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.230 11:11:25 -- nvmf/common.sh@422 -- # return 0 00:20:17.230 11:11:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:17.230 11:11:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.230 11:11:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:17.230 11:11:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:17.230 11:11:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.230 11:11:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:17.230 11:11:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:17.230 11:11:25 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:20:17.230 11:11:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:17.230 11:11:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:17.230 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:17.230 11:11:25 -- nvmf/common.sh@470 -- # nvmfpid=72282 00:20:17.230 11:11:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:17.230 11:11:25 -- nvmf/common.sh@471 -- # waitforlisten 72282 00:20:17.230 11:11:25 -- common/autotest_common.sh@817 -- # '[' -z 72282 ']' 00:20:17.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.230 11:11:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.230 11:11:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:17.230 11:11:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.230 11:11:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:17.230 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:17.488 [2024-04-18 11:11:25.517388] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:17.488 [2024-04-18 11:11:25.517767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.488 [2024-04-18 11:11:25.695525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.746 [2024-04-18 11:11:25.962454] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.746 [2024-04-18 11:11:25.962832] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.746 [2024-04-18 11:11:25.963000] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.746 [2024-04-18 11:11:25.963022] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.746 [2024-04-18 11:11:25.963037] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
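With the topology verified by the three pings, nvmfappstart launches the target inside the namespace and blocks until its JSON-RPC socket answers, so the transport, bdev and subsystem RPCs traced below cannot race the application start. Roughly (the polling loop here is a simplification of waitforlisten, not a copy of it):

  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!        # 72282 in this run
  # hold off on RPCs until /var/tmp/spdk.sock is answering
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done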
00:20:17.746 [2024-04-18 11:11:25.963246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.746 [2024-04-18 11:11:25.963655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.746 [2024-04-18 11:11:25.963872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.746 [2024-04-18 11:11:25.963883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.314 11:11:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:18.314 11:11:26 -- common/autotest_common.sh@850 -- # return 0 00:20:18.314 11:11:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:18.314 11:11:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:18.314 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:20:18.314 11:11:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.314 11:11:26 -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:18.572 [2024-04-18 11:11:26.779262] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.830 11:11:26 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:20:18.830 11:11:26 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:20:18.830 11:11:26 -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:19.088 Malloc1 00:20:19.088 11:11:27 -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:19.346 Malloc2 00:20:19.346 11:11:27 -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:19.605 11:11:27 -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:20:19.879 11:11:28 -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.161 [2024-04-18 11:11:28.293517] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.161 11:11:28 -- target/ns_masking.sh@61 -- # connect 00:20:20.161 11:11:28 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cd2668a4-1046-42d7-b407-4671765f7ee2 -a 10.0.0.2 -s 4420 -i 4 00:20:20.419 11:11:28 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:20:20.419 11:11:28 -- common/autotest_common.sh@1184 -- # local i=0 00:20:20.419 11:11:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:20.419 11:11:28 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:20.419 11:11:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:22.317 11:11:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:22.317 11:11:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:22.317 11:11:30 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:22.317 11:11:30 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:22.317 11:11:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:22.317 11:11:30 -- common/autotest_common.sh@1194 -- # return 0 00:20:22.317 11:11:30 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:20:22.317 11:11:30 -- 
target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:22.317 11:11:30 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:20:22.317 11:11:30 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:20:22.317 11:11:30 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:20:22.317 11:11:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:22.317 11:11:30 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:22.317 [ 0]:0x1 00:20:22.317 11:11:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:22.317 11:11:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:22.575 11:11:30 -- target/ns_masking.sh@40 -- # nguid=a825cd5bb28e472dbed53d6f57e4866d 00:20:22.575 11:11:30 -- target/ns_masking.sh@41 -- # [[ a825cd5bb28e472dbed53d6f57e4866d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:22.575 11:11:30 -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:20:22.835 11:11:30 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:20:22.835 11:11:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:22.835 11:11:30 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:22.835 [ 0]:0x1 00:20:22.835 11:11:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:22.835 11:11:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:22.835 11:11:30 -- target/ns_masking.sh@40 -- # nguid=a825cd5bb28e472dbed53d6f57e4866d 00:20:22.835 11:11:30 -- target/ns_masking.sh@41 -- # [[ a825cd5bb28e472dbed53d6f57e4866d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:22.835 11:11:30 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:20:22.835 11:11:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:22.835 11:11:30 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:22.835 [ 1]:0x2 00:20:22.835 11:11:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:22.835 11:11:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:22.835 11:11:30 -- target/ns_masking.sh@40 -- # nguid=114875fbcdcc4a1abab7ddd126974655 00:20:22.835 11:11:30 -- target/ns_masking.sh@41 -- # [[ 114875fbcdcc4a1abab7ddd126974655 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:22.835 11:11:30 -- target/ns_masking.sh@69 -- # disconnect 00:20:22.835 11:11:30 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:22.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:22.835 11:11:31 -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:23.401 11:11:31 -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:20:23.401 11:11:31 -- target/ns_masking.sh@77 -- # connect 1 00:20:23.401 11:11:31 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cd2668a4-1046-42d7-b407-4671765f7ee2 -a 10.0.0.2 -s 4420 -i 4 00:20:23.659 11:11:31 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:20:23.659 11:11:31 -- common/autotest_common.sh@1184 -- # local i=0 00:20:23.659 11:11:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:23.659 11:11:31 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:20:23.659 11:11:31 -- 
common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:20:23.659 11:11:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:25.557 11:11:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:25.557 11:11:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:25.557 11:11:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:25.557 11:11:33 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:25.557 11:11:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:25.557 11:11:33 -- common/autotest_common.sh@1194 -- # return 0 00:20:25.557 11:11:33 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:20:25.557 11:11:33 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:25.557 11:11:33 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:20:25.557 11:11:33 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:20:25.557 11:11:33 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:20:25.557 11:11:33 -- common/autotest_common.sh@638 -- # local es=0 00:20:25.557 11:11:33 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:20:25.557 11:11:33 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:20:25.557 11:11:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:25.557 11:11:33 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:20:25.557 11:11:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:25.557 11:11:33 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:20:25.557 11:11:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:25.557 11:11:33 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:25.557 11:11:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:25.557 11:11:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:25.815 11:11:33 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:25.815 11:11:33 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:25.815 11:11:33 -- common/autotest_common.sh@641 -- # es=1 00:20:25.815 11:11:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:25.815 11:11:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:25.815 11:11:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:25.815 11:11:33 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:20:25.815 11:11:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:25.815 11:11:33 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:25.815 [ 0]:0x2 00:20:25.815 11:11:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:25.815 11:11:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:25.815 11:11:33 -- target/ns_masking.sh@40 -- # nguid=114875fbcdcc4a1abab7ddd126974655 00:20:25.815 11:11:33 -- target/ns_masking.sh@41 -- # [[ 114875fbcdcc4a1abab7ddd126974655 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:25.815 11:11:33 -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:26.073 11:11:34 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:20:26.073 11:11:34 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:26.073 11:11:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:26.073 [ 0]:0x1 00:20:26.073 
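The "[ 0]:0x1" line above is the first half of the ns_is_visible check; the NGUID comparison it finishes with continues in the id-ns/jq lines just below. Namespace 1 was re-attached with --no-auto-visible, so it only passes this check after the nvmf_ns_add_host call seen above. Reconstructed from the @39-@41 trace lines (with /dev/nvme0 as resolved earlier in this run), the helper is roughly:

  ns_is_visible() {
      # the NSID must appear in the controller's active namespace list...
      nvme list-ns /dev/nvme0 | grep "$1"
      # ...and must report a real NGUID rather than all zeroes
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }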
11:11:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:26.073 11:11:34 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:26.073 11:11:34 -- target/ns_masking.sh@40 -- # nguid=a825cd5bb28e472dbed53d6f57e4866d 00:20:26.073 11:11:34 -- target/ns_masking.sh@41 -- # [[ a825cd5bb28e472dbed53d6f57e4866d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:26.073 11:11:34 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:20:26.073 11:11:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:26.073 11:11:34 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:26.073 [ 1]:0x2 00:20:26.073 11:11:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:26.073 11:11:34 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:26.331 11:11:34 -- target/ns_masking.sh@40 -- # nguid=114875fbcdcc4a1abab7ddd126974655 00:20:26.331 11:11:34 -- target/ns_masking.sh@41 -- # [[ 114875fbcdcc4a1abab7ddd126974655 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:26.331 11:11:34 -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:26.588 11:11:34 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:20:26.588 11:11:34 -- common/autotest_common.sh@638 -- # local es=0 00:20:26.588 11:11:34 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:20:26.588 11:11:34 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:20:26.588 11:11:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:26.588 11:11:34 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:20:26.588 11:11:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:26.588 11:11:34 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:20:26.589 11:11:34 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:26.589 11:11:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:26.589 11:11:34 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:26.589 11:11:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:26.589 11:11:34 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:26.589 11:11:34 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:26.589 11:11:34 -- common/autotest_common.sh@641 -- # es=1 00:20:26.589 11:11:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:26.589 11:11:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:26.589 11:11:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:26.589 11:11:34 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:20:26.589 11:11:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:26.589 11:11:34 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:26.589 [ 0]:0x2 00:20:26.589 11:11:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:26.589 11:11:34 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:26.589 11:11:34 -- target/ns_masking.sh@40 -- # nguid=114875fbcdcc4a1abab7ddd126974655 00:20:26.589 11:11:34 -- target/ns_masking.sh@41 -- # [[ 114875fbcdcc4a1abab7ddd126974655 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:26.589 11:11:34 -- target/ns_masking.sh@91 -- # disconnect 00:20:26.589 11:11:34 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:26.589 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:26.589 11:11:34 -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:26.846 11:11:35 -- target/ns_masking.sh@95 -- # connect 2 00:20:26.846 11:11:35 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cd2668a4-1046-42d7-b407-4671765f7ee2 -a 10.0.0.2 -s 4420 -i 4 00:20:27.106 11:11:35 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:27.106 11:11:35 -- common/autotest_common.sh@1184 -- # local i=0 00:20:27.106 11:11:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:27.106 11:11:35 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:20:27.106 11:11:35 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:20:27.106 11:11:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:29.009 11:11:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:29.009 11:11:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:29.009 11:11:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:29.009 11:11:37 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:20:29.009 11:11:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:29.009 11:11:37 -- common/autotest_common.sh@1194 -- # return 0 00:20:29.009 11:11:37 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:20:29.009 11:11:37 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:29.266 11:11:37 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:20:29.266 11:11:37 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:20:29.266 11:11:37 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:20:29.266 11:11:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:29.266 11:11:37 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:29.266 [ 0]:0x1 00:20:29.266 11:11:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:29.266 11:11:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:29.266 11:11:37 -- target/ns_masking.sh@40 -- # nguid=a825cd5bb28e472dbed53d6f57e4866d 00:20:29.266 11:11:37 -- target/ns_masking.sh@41 -- # [[ a825cd5bb28e472dbed53d6f57e4866d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:29.266 11:11:37 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:20:29.266 11:11:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:29.266 11:11:37 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:29.266 [ 1]:0x2 00:20:29.266 11:11:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:29.266 11:11:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:29.266 11:11:37 -- target/ns_masking.sh@40 -- # nguid=114875fbcdcc4a1abab7ddd126974655 00:20:29.266 11:11:37 -- target/ns_masking.sh@41 -- # [[ 114875fbcdcc4a1abab7ddd126974655 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:29.266 11:11:37 -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:29.523 11:11:37 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:20:29.523 11:11:37 -- common/autotest_common.sh@638 -- # local es=0 00:20:29.523 11:11:37 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 
00:20:29.523 11:11:37 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:20:29.523 11:11:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:29.523 11:11:37 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:20:29.780 11:11:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:29.780 11:11:37 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:20:29.780 11:11:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:29.780 11:11:37 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:29.780 11:11:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:29.780 11:11:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:29.780 11:11:37 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:29.780 11:11:37 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:29.780 11:11:37 -- common/autotest_common.sh@641 -- # es=1 00:20:29.780 11:11:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:29.780 11:11:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:29.780 11:11:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:29.780 11:11:37 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:20:29.780 11:11:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:29.780 11:11:37 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:29.780 [ 0]:0x2 00:20:29.780 11:11:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:29.780 11:11:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:29.780 11:11:37 -- target/ns_masking.sh@40 -- # nguid=114875fbcdcc4a1abab7ddd126974655 00:20:29.780 11:11:37 -- target/ns_masking.sh@41 -- # [[ 114875fbcdcc4a1abab7ddd126974655 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:29.780 11:11:37 -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:29.780 11:11:37 -- common/autotest_common.sh@638 -- # local es=0 00:20:29.780 11:11:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:29.780 11:11:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.780 11:11:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:29.780 11:11:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.780 11:11:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:29.780 11:11:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.780 11:11:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:29.780 11:11:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.780 11:11:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:29.780 11:11:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:30.037 [2024-04-18 11:11:38.173698] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:30.037 2024/04/18 11:11:38 error on 
JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:20:30.037 request: 00:20:30.037 { 00:20:30.037 "method": "nvmf_ns_remove_host", 00:20:30.037 "params": { 00:20:30.037 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.037 "nsid": 2, 00:20:30.037 "host": "nqn.2016-06.io.spdk:host1" 00:20:30.037 } 00:20:30.037 } 00:20:30.037 Got JSON-RPC error response 00:20:30.037 GoRPCClient: error on JSON-RPC call 00:20:30.037 11:11:38 -- common/autotest_common.sh@641 -- # es=1 00:20:30.037 11:11:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:30.037 11:11:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:30.037 11:11:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:30.037 11:11:38 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:20:30.037 11:11:38 -- common/autotest_common.sh@638 -- # local es=0 00:20:30.037 11:11:38 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:20:30.037 11:11:38 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:20:30.037 11:11:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:30.037 11:11:38 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:20:30.037 11:11:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:30.037 11:11:38 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:20:30.037 11:11:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:30.037 11:11:38 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:30.037 11:11:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:30.037 11:11:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:30.294 11:11:38 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:30.294 11:11:38 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:30.294 11:11:38 -- common/autotest_common.sh@641 -- # es=1 00:20:30.294 11:11:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:30.294 11:11:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:30.294 11:11:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:30.294 11:11:38 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:20:30.294 11:11:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:30.294 11:11:38 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:30.294 [ 0]:0x2 00:20:30.294 11:11:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:30.294 11:11:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:30.294 11:11:38 -- target/ns_masking.sh@40 -- # nguid=114875fbcdcc4a1abab7ddd126974655 00:20:30.294 11:11:38 -- target/ns_masking.sh@41 -- # [[ 114875fbcdcc4a1abab7ddd126974655 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:30.294 11:11:38 -- target/ns_masking.sh@108 -- # disconnect 00:20:30.294 11:11:38 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:30.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:30.294 11:11:38 -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.552 11:11:38 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:30.553 11:11:38 -- target/ns_masking.sh@114 -- # nvmftestfini 
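The JSON-RPC failure above is the expected outcome rather than a test error: namespace 2 was attached without --no-auto-visible, so it is not under per-host masking and nvmf_ns_remove_host rejects it with code -32602 (Invalid parameters), which the NOT wrapper counts as a pass. The masking calls this test exercises, with the names used in this run (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

  # attach a namespace that is hidden from every host by default
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # grant and revoke visibility for a single initiator
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # auto-visible namespaces (nsid 2 here) are not maskable, so this fails as logged above
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1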
00:20:30.553 11:11:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:30.553 11:11:38 -- nvmf/common.sh@117 -- # sync 00:20:30.553 11:11:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:30.553 11:11:38 -- nvmf/common.sh@120 -- # set +e 00:20:30.553 11:11:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:30.553 11:11:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:30.553 rmmod nvme_tcp 00:20:30.553 rmmod nvme_fabrics 00:20:30.553 rmmod nvme_keyring 00:20:30.553 11:11:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:30.553 11:11:38 -- nvmf/common.sh@124 -- # set -e 00:20:30.553 11:11:38 -- nvmf/common.sh@125 -- # return 0 00:20:30.553 11:11:38 -- nvmf/common.sh@478 -- # '[' -n 72282 ']' 00:20:30.553 11:11:38 -- nvmf/common.sh@479 -- # killprocess 72282 00:20:30.553 11:11:38 -- common/autotest_common.sh@936 -- # '[' -z 72282 ']' 00:20:30.553 11:11:38 -- common/autotest_common.sh@940 -- # kill -0 72282 00:20:30.553 11:11:38 -- common/autotest_common.sh@941 -- # uname 00:20:30.553 11:11:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:30.553 11:11:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72282 00:20:30.553 killing process with pid 72282 00:20:30.553 11:11:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:30.553 11:11:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:30.553 11:11:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72282' 00:20:30.553 11:11:38 -- common/autotest_common.sh@955 -- # kill 72282 00:20:30.553 11:11:38 -- common/autotest_common.sh@960 -- # wait 72282 00:20:32.460 11:11:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:32.460 11:11:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:32.460 11:11:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:32.460 11:11:40 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.460 11:11:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.460 11:11:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.460 11:11:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.460 11:11:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.460 11:11:40 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:32.460 ************************************ 00:20:32.460 END TEST nvmf_ns_masking 00:20:32.460 ************************************ 00:20:32.460 00:20:32.460 real 0m15.683s 00:20:32.460 user 1m0.888s 00:20:32.460 sys 0m2.647s 00:20:32.460 11:11:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:32.460 11:11:40 -- common/autotest_common.sh@10 -- # set +x 00:20:32.460 11:11:40 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:20:32.460 11:11:40 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:20:32.460 11:11:40 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:32.460 11:11:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:32.460 11:11:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:32.460 11:11:40 -- common/autotest_common.sh@10 -- # set +x 00:20:32.747 ************************************ 00:20:32.747 START TEST nvmf_host_management 00:20:32.747 ************************************ 00:20:32.747 11:11:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:32.747 * Looking for test storage... 
00:20:32.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:32.747 11:11:40 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:32.747 11:11:40 -- nvmf/common.sh@7 -- # uname -s 00:20:32.747 11:11:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.747 11:11:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.747 11:11:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.747 11:11:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.747 11:11:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.747 11:11:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.747 11:11:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.747 11:11:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.747 11:11:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.747 11:11:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.747 11:11:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:20:32.747 11:11:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:20:32.747 11:11:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.747 11:11:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.747 11:11:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:32.747 11:11:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.747 11:11:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:32.747 11:11:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.747 11:11:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.747 11:11:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.747 11:11:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.747 11:11:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.747 11:11:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.747 11:11:40 -- paths/export.sh@5 -- # export PATH 00:20:32.747 11:11:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.747 11:11:40 -- nvmf/common.sh@47 -- # : 0 00:20:32.747 11:11:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:32.747 11:11:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:32.747 11:11:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.747 11:11:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.747 11:11:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.747 11:11:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:32.747 11:11:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:32.747 11:11:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:32.747 11:11:40 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:32.747 11:11:40 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:32.747 11:11:40 -- target/host_management.sh@105 -- # nvmftestinit 00:20:32.747 11:11:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:32.747 11:11:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.747 11:11:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:32.747 11:11:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:32.747 11:11:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:32.747 11:11:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.747 11:11:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.747 11:11:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.747 11:11:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:32.747 11:11:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:32.747 11:11:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:32.747 11:11:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:32.747 11:11:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:32.747 11:11:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:32.747 11:11:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.747 11:11:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.747 11:11:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:32.747 11:11:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:32.747 11:11:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:32.747 11:11:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:32.747 11:11:40 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:32.747 11:11:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.747 11:11:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:32.747 11:11:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:32.747 11:11:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:32.747 11:11:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:32.747 11:11:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:32.747 11:11:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:32.747 Cannot find device "nvmf_tgt_br" 00:20:32.747 11:11:40 -- nvmf/common.sh@155 -- # true 00:20:32.747 11:11:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:32.747 Cannot find device "nvmf_tgt_br2" 00:20:32.747 11:11:40 -- nvmf/common.sh@156 -- # true 00:20:32.747 11:11:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:32.747 11:11:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:32.747 Cannot find device "nvmf_tgt_br" 00:20:32.747 11:11:40 -- nvmf/common.sh@158 -- # true 00:20:32.748 11:11:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:32.748 Cannot find device "nvmf_tgt_br2" 00:20:32.748 11:11:40 -- nvmf/common.sh@159 -- # true 00:20:32.748 11:11:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:32.748 11:11:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:33.006 11:11:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.006 11:11:40 -- nvmf/common.sh@162 -- # true 00:20:33.006 11:11:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.006 11:11:40 -- nvmf/common.sh@163 -- # true 00:20:33.006 11:11:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:33.006 11:11:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:33.006 11:11:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:33.006 11:11:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:33.006 11:11:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:33.006 11:11:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:33.006 11:11:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:33.006 11:11:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:33.006 11:11:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:33.006 11:11:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:33.006 11:11:41 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:33.006 11:11:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:33.006 11:11:41 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:33.006 11:11:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:33.006 11:11:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:33.006 11:11:41 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:20:33.006 11:11:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:33.006 11:11:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:33.006 11:11:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:33.006 11:11:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:33.006 11:11:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:33.006 11:11:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:33.006 11:11:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:33.006 11:11:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:33.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:33.006 00:20:33.006 --- 10.0.0.2 ping statistics --- 00:20:33.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.006 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:33.006 11:11:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:33.006 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:33.006 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:20:33.006 00:20:33.006 --- 10.0.0.3 ping statistics --- 00:20:33.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.006 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:33.006 11:11:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:33.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:33.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:20:33.006 00:20:33.006 --- 10.0.0.1 ping statistics --- 00:20:33.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.006 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:33.006 11:11:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.006 11:11:41 -- nvmf/common.sh@422 -- # return 0 00:20:33.006 11:11:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:33.006 11:11:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.006 11:11:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:33.006 11:11:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:33.006 11:11:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.006 11:11:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:33.006 11:11:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:33.006 11:11:41 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:20:33.006 11:11:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:33.006 11:11:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:33.006 11:11:41 -- common/autotest_common.sh@10 -- # set +x 00:20:33.264 ************************************ 00:20:33.265 START TEST nvmf_host_management 00:20:33.265 ************************************ 00:20:33.265 11:11:41 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:20:33.265 11:11:41 -- target/host_management.sh@69 -- # starttarget 00:20:33.265 11:11:41 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:20:33.265 11:11:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:33.265 11:11:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:33.265 11:11:41 -- common/autotest_common.sh@10 -- # set +x 00:20:33.265 11:11:41 -- nvmf/common.sh@470 -- # nvmfpid=72869 00:20:33.265 
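host_management.sh repeats the same namespace and veth bring-up, but starts the target with core mask 0x1E, i.e. reactors on cores 1 through 4 (confirmed by the reactor messages below) instead of the 0xF mask (cores 0 through 3) used for ns_masking; the launch itself is the same netns-wrapped command:

  # 0x1E = 0b11110 -> cores 1,2,3,4
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E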
11:11:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:33.265 11:11:41 -- nvmf/common.sh@471 -- # waitforlisten 72869 00:20:33.265 11:11:41 -- common/autotest_common.sh@817 -- # '[' -z 72869 ']' 00:20:33.265 11:11:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.265 11:11:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:33.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.265 11:11:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.265 11:11:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:33.265 11:11:41 -- common/autotest_common.sh@10 -- # set +x 00:20:33.265 [2024-04-18 11:11:41.386900] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:33.265 [2024-04-18 11:11:41.387056] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.523 [2024-04-18 11:11:41.553889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.780 [2024-04-18 11:11:41.818806] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.780 [2024-04-18 11:11:41.818878] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.780 [2024-04-18 11:11:41.818899] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.780 [2024-04-18 11:11:41.818913] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.780 [2024-04-18 11:11:41.818927] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
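Inside the namespace the target is launched as ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E (the NVMF_TARGET_NS_CMD prefix added above), and waitforlisten then blocks until the process answers on /var/tmp/spdk.sock before any further RPCs run. A minimal sketch of that wait pattern, assuming the pid and socket shown in the trace (the real helper in autotest_common.sh adds retry limits and richer error reporting; rpc_get_methods is used here only as a cheap liveness probe):

    pid=72869                        # the nvmfpid captured above
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done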
00:20:33.780 [2024-04-18 11:11:41.819821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.780 [2024-04-18 11:11:41.819955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.780 [2024-04-18 11:11:41.820069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.780 [2024-04-18 11:11:41.820100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:34.419 11:11:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:34.419 11:11:42 -- common/autotest_common.sh@850 -- # return 0 00:20:34.419 11:11:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:34.419 11:11:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:34.419 11:11:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.419 11:11:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.419 11:11:42 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:34.419 11:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.419 11:11:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.419 [2024-04-18 11:11:42.359111] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.419 11:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.419 11:11:42 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:20:34.419 11:11:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.419 11:11:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.419 11:11:42 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:20:34.419 11:11:42 -- target/host_management.sh@23 -- # cat 00:20:34.419 11:11:42 -- target/host_management.sh@30 -- # rpc_cmd 00:20:34.419 11:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.419 11:11:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.419 Malloc0 00:20:34.419 [2024-04-18 11:11:42.495367] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.419 11:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.419 11:11:42 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:20:34.419 11:11:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:34.419 11:11:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.419 11:11:42 -- target/host_management.sh@73 -- # perfpid=72951 00:20:34.419 11:11:42 -- target/host_management.sh@74 -- # waitforlisten 72951 /var/tmp/bdevperf.sock 00:20:34.419 11:11:42 -- common/autotest_common.sh@817 -- # '[' -z 72951 ']' 00:20:34.419 11:11:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.419 11:11:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.419 11:11:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
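The subsystem plumbing at host_management.sh@22-@30 above is written to rpcs.txt and piped into rpc_cmd, so the individual calls are not echoed in the trace. Judging by the Malloc0 bdev and the 10.0.0.2:4420 listener notices here, and by the initiator JSON printed below, they amount to roughly the following (a reconstruction for orientation, not the literal script contents):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                    # -> Malloc0 (size/block not shown in the trace)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0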
00:20:34.419 11:11:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.419 11:11:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.419 11:11:42 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:20:34.419 11:11:42 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:34.419 11:11:42 -- nvmf/common.sh@521 -- # config=() 00:20:34.419 11:11:42 -- nvmf/common.sh@521 -- # local subsystem config 00:20:34.419 11:11:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:34.419 11:11:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:34.419 { 00:20:34.419 "params": { 00:20:34.419 "name": "Nvme$subsystem", 00:20:34.419 "trtype": "$TEST_TRANSPORT", 00:20:34.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.419 "adrfam": "ipv4", 00:20:34.419 "trsvcid": "$NVMF_PORT", 00:20:34.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.419 "hdgst": ${hdgst:-false}, 00:20:34.419 "ddgst": ${ddgst:-false} 00:20:34.419 }, 00:20:34.419 "method": "bdev_nvme_attach_controller" 00:20:34.419 } 00:20:34.419 EOF 00:20:34.419 )") 00:20:34.419 11:11:42 -- nvmf/common.sh@543 -- # cat 00:20:34.419 11:11:42 -- nvmf/common.sh@545 -- # jq . 00:20:34.419 11:11:42 -- nvmf/common.sh@546 -- # IFS=, 00:20:34.419 11:11:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:34.419 "params": { 00:20:34.419 "name": "Nvme0", 00:20:34.419 "trtype": "tcp", 00:20:34.419 "traddr": "10.0.0.2", 00:20:34.419 "adrfam": "ipv4", 00:20:34.419 "trsvcid": "4420", 00:20:34.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:34.419 "hdgst": false, 00:20:34.419 "ddgst": false 00:20:34.419 }, 00:20:34.419 "method": "bdev_nvme_attach_controller" 00:20:34.419 }' 00:20:34.419 [2024-04-18 11:11:42.634695] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:34.419 [2024-04-18 11:11:42.634832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72951 ] 00:20:34.677 [2024-04-18 11:11:42.804611] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.936 [2024-04-18 11:11:43.059169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.504 Running I/O for 10 seconds... 
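On the initiator side, gen_nvmf_target_json emits an SPDK JSON config whose bdev_nvme_attach_controller entry is the fragment printf'd above; bdevperf reads the whole config through process substitution (--json /dev/fd/63) rather than a file on disk. Reformatted for readability, that entry is (the surrounding bdev-subsystem wrapper is not shown in the trace):

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

With that controller attached, bdevperf starts the queue-depth-64, 64 KiB verify workload (-q 64 -o 65536 -w verify -t 10) logged just above.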
00:20:35.504 11:11:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:35.504 11:11:43 -- common/autotest_common.sh@850 -- # return 0 00:20:35.504 11:11:43 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:35.504 11:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.504 11:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.504 11:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.504 11:11:43 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.504 11:11:43 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:20:35.504 11:11:43 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:35.504 11:11:43 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:20:35.504 11:11:43 -- target/host_management.sh@52 -- # local ret=1 00:20:35.504 11:11:43 -- target/host_management.sh@53 -- # local i 00:20:35.504 11:11:43 -- target/host_management.sh@54 -- # (( i = 10 )) 00:20:35.504 11:11:43 -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:35.504 11:11:43 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:20:35.504 11:11:43 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:35.504 11:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.504 11:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.504 11:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.504 11:11:43 -- target/host_management.sh@55 -- # read_io_count=195 00:20:35.504 11:11:43 -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:20:35.504 11:11:43 -- target/host_management.sh@59 -- # ret=0 00:20:35.504 11:11:43 -- target/host_management.sh@60 -- # break 00:20:35.504 11:11:43 -- target/host_management.sh@64 -- # return 0 00:20:35.504 11:11:43 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:35.504 11:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.504 11:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.504 [2024-04-18 11:11:43.684842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.684917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.684951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.684968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.684986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.504 [2024-04-18 11:11:43.685546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.504 [2024-04-18 11:11:43.685562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.685974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.685990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.505 [2024-04-18 11:11:43.686753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.505 [2024-04-18 11:11:43.686768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.506 [2024-04-18 11:11:43.686784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.506 [2024-04-18 11:11:43.686798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.506 [2024-04-18 11:11:43.686814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.506 [2024-04-18 11:11:43.686828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.506 [2024-04-18 11:11:43.686844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.506 [2024-04-18 11:11:43.686858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.506 [2024-04-18 11:11:43.686874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.506 [2024-04-18 11:11:43.686889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.506 [2024-04-18 11:11:43.686905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.506 [2024-04-18 11:11:43.686919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.506 [2024-04-18 11:11:43.687225] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007e40 was disconnected and freed. reset controller. 00:20:35.506 task offset: 35968 on job bdev=Nvme0n1 fails 00:20:35.506 00:20:35.506 Latency(us) 00:20:35.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.506 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:35.506 Job: Nvme0n1 ended in about 0.22 seconds with error 00:20:35.506 Verification LBA range: start 0x0 length 0x400 00:20:35.506 Nvme0n1 : 0.22 1148.88 71.81 287.22 0.00 42054.89 3187.43 42181.35 00:20:35.506 =================================================================================================================== 00:20:35.506 Total : 1148.88 71.81 287.22 0.00 42054.89 3187.43 42181.35 00:20:35.506 [2024-04-18 11:11:43.688522] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.506 [2024-04-18 11:11:43.693756] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:35.506 [2024-04-18 11:11:43.693804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:20:35.506 11:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.506 11:11:43 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:35.506 11:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.506 11:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.506 [2024-04-18 11:11:43.696500] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:20:35.506 [2024-04-18 11:11:43.696635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:35.506 [2024-04-18 11:11:43.696676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.506 [2024-04-18 11:11:43.696702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:20:35.506 [2024-04-18 11:11:43.696718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:20:35.506 [2024-04-18 11:11:43.696735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:35.506 [2024-04-18 11:11:43.696755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000005a40 00:20:35.506 [2024-04-18 11:11:43.696808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:20:35.506 [2024-04-18 11:11:43.696835] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.506 [2024-04-18 11:11:43.696850] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.506 [2024-04-18 11:11:43.696866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:20:35.506 [2024-04-18 11:11:43.696893] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.506 11:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.506 11:11:43 -- target/host_management.sh@87 -- # sleep 1 00:20:36.903 11:11:44 -- target/host_management.sh@91 -- # kill -9 72951 00:20:36.903 11:11:44 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:20:36.903 11:11:44 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:36.903 11:11:44 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:20:36.903 11:11:44 -- nvmf/common.sh@521 -- # config=() 00:20:36.903 11:11:44 -- nvmf/common.sh@521 -- # local subsystem config 00:20:36.903 11:11:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.903 11:11:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.903 { 00:20:36.903 "params": { 00:20:36.903 "name": "Nvme$subsystem", 00:20:36.903 "trtype": "$TEST_TRANSPORT", 00:20:36.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.903 "adrfam": "ipv4", 00:20:36.903 "trsvcid": "$NVMF_PORT", 00:20:36.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.903 "hdgst": ${hdgst:-false}, 00:20:36.903 "ddgst": ${ddgst:-false} 00:20:36.903 }, 00:20:36.903 "method": "bdev_nvme_attach_controller" 00:20:36.903 } 00:20:36.903 EOF 00:20:36.903 )") 00:20:36.903 11:11:44 -- nvmf/common.sh@543 -- # cat 00:20:36.903 11:11:44 -- nvmf/common.sh@545 -- # jq . 00:20:36.903 11:11:44 -- nvmf/common.sh@546 -- # IFS=, 00:20:36.903 11:11:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:36.903 "params": { 00:20:36.903 "name": "Nvme0", 00:20:36.903 "trtype": "tcp", 00:20:36.903 "traddr": "10.0.0.2", 00:20:36.903 "adrfam": "ipv4", 00:20:36.903 "trsvcid": "4420", 00:20:36.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:36.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:36.903 "hdgst": false, 00:20:36.903 "ddgst": false 00:20:36.903 }, 00:20:36.903 "method": "bdev_nvme_attach_controller" 00:20:36.903 }' 00:20:36.903 [2024-04-18 11:11:44.799864] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:36.903 [2024-04-18 11:11:44.800011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72997 ] 00:20:36.903 [2024-04-18 11:11:44.960143] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.161 [2024-04-18 11:11:45.199181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.421 Running I/O for 1 seconds... 
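What the wall of ABORTED - SQ DELETION completions and the failed reconnect above actually exercise is the host-management check itself: once waitforio saw read_io_count=195 (>= 100) on Nvme0n1, the test revoked host0's access to cnode0 while I/O was in flight, which tears down the queue pair and makes bdevperf's reconnect fail with "does not allow host"; access is then restored, the stuck bdevperf is killed, and the fresh 1-second run started above verifies the path works again (results below). Condensed, the driving RPCs look roughly like this (rpc_cmd in the trace is a thin wrapper around rpc.py; the polling interval here is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # gate: wait until bdevperf has completed some reads against Nvme0n1
    while [ "$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')" -lt 100 ]; do
        sleep 0.5
    done
    # pull host access while I/O is outstanding, then give it back
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0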
00:20:38.797 00:20:38.797 Latency(us) 00:20:38.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.797 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.797 Verification LBA range: start 0x0 length 0x400 00:20:38.797 Nvme0n1 : 1.03 1369.88 85.62 0.00 0.00 45845.22 8221.79 44087.85 00:20:38.797 =================================================================================================================== 00:20:38.797 Total : 1369.88 85.62 0.00 0.00 45845.22 8221.79 44087.85 00:20:39.731 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 68: 72951 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:20:39.731 11:11:47 -- target/host_management.sh@102 -- # stoptarget 00:20:39.731 11:11:47 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:20:39.731 11:11:47 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:20:39.731 11:11:47 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:20:39.731 11:11:47 -- target/host_management.sh@40 -- # nvmftestfini 00:20:39.731 11:11:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:39.731 11:11:47 -- nvmf/common.sh@117 -- # sync 00:20:39.731 11:11:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.731 11:11:47 -- nvmf/common.sh@120 -- # set +e 00:20:39.731 11:11:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.731 11:11:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.731 rmmod nvme_tcp 00:20:39.731 rmmod nvme_fabrics 00:20:39.731 rmmod nvme_keyring 00:20:39.731 11:11:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.732 11:11:47 -- nvmf/common.sh@124 -- # set -e 00:20:39.732 11:11:47 -- nvmf/common.sh@125 -- # return 0 00:20:39.732 11:11:47 -- nvmf/common.sh@478 -- # '[' -n 72869 ']' 00:20:39.732 11:11:47 -- nvmf/common.sh@479 -- # killprocess 72869 00:20:39.732 11:11:47 -- common/autotest_common.sh@936 -- # '[' -z 72869 ']' 00:20:39.732 11:11:47 -- common/autotest_common.sh@940 -- # kill -0 72869 00:20:39.732 11:11:47 -- common/autotest_common.sh@941 -- # uname 00:20:39.732 11:11:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.732 11:11:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72869 00:20:39.732 killing process with pid 72869 00:20:39.732 11:11:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:39.732 11:11:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:39.732 11:11:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72869' 00:20:39.732 11:11:47 -- common/autotest_common.sh@955 -- # kill 72869 00:20:39.732 11:11:47 -- common/autotest_common.sh@960 -- # wait 72869 00:20:41.106 [2024-04-18 11:11:49.185864] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:20:41.106 11:11:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:41.106 11:11:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:41.106 11:11:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:41.106 11:11:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.106 11:11:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.106 11:11:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.106 11:11:49 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:41.106 11:11:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.106 11:11:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:41.106 00:20:41.106 real 0m8.028s 00:20:41.106 user 0m33.671s 00:20:41.106 sys 0m1.390s 00:20:41.106 11:11:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:41.106 11:11:49 -- common/autotest_common.sh@10 -- # set +x 00:20:41.106 ************************************ 00:20:41.106 END TEST nvmf_host_management 00:20:41.106 ************************************ 00:20:41.365 11:11:49 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:41.365 00:20:41.365 real 0m8.627s 00:20:41.365 user 0m33.813s 00:20:41.365 sys 0m1.674s 00:20:41.365 11:11:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:41.365 11:11:49 -- common/autotest_common.sh@10 -- # set +x 00:20:41.365 ************************************ 00:20:41.365 END TEST nvmf_host_management 00:20:41.365 ************************************ 00:20:41.365 11:11:49 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:41.365 11:11:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:41.365 11:11:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:41.365 11:11:49 -- common/autotest_common.sh@10 -- # set +x 00:20:41.365 ************************************ 00:20:41.365 START TEST nvmf_lvol 00:20:41.365 ************************************ 00:20:41.365 11:11:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:41.365 * Looking for test storage... 00:20:41.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:41.365 11:11:49 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:41.365 11:11:49 -- nvmf/common.sh@7 -- # uname -s 00:20:41.365 11:11:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.365 11:11:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.365 11:11:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.365 11:11:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.365 11:11:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.365 11:11:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.365 11:11:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.365 11:11:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.365 11:11:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.365 11:11:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.365 11:11:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:20:41.365 11:11:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:20:41.365 11:11:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.365 11:11:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.365 11:11:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:41.365 11:11:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.365 11:11:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:41.365 11:11:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.365 11:11:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.365 11:11:49 -- scripts/common.sh@511 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.365 11:11:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.365 11:11:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.365 11:11:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.365 11:11:49 -- paths/export.sh@5 -- # export PATH 00:20:41.365 11:11:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.365 11:11:49 -- nvmf/common.sh@47 -- # : 0 00:20:41.365 11:11:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.365 11:11:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.365 11:11:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.365 11:11:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.365 11:11:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.365 11:11:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.365 11:11:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.365 11:11:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.365 11:11:49 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:41.365 11:11:49 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:41.365 11:11:49 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:20:41.365 11:11:49 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:20:41.365 11:11:49 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:41.365 11:11:49 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:20:41.365 11:11:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
00:20:41.365 11:11:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.365 11:11:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:41.365 11:11:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:41.365 11:11:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:41.365 11:11:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.365 11:11:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.365 11:11:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.365 11:11:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:41.365 11:11:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:41.365 11:11:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:41.365 11:11:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:41.365 11:11:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:41.365 11:11:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:41.365 11:11:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.365 11:11:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.365 11:11:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:41.365 11:11:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:41.365 11:11:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:41.624 11:11:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:41.624 11:11:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:41.624 11:11:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.624 11:11:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:41.624 11:11:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:41.624 11:11:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:41.624 11:11:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:41.624 11:11:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:41.624 11:11:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:41.624 Cannot find device "nvmf_tgt_br" 00:20:41.624 11:11:49 -- nvmf/common.sh@155 -- # true 00:20:41.624 11:11:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:41.624 Cannot find device "nvmf_tgt_br2" 00:20:41.624 11:11:49 -- nvmf/common.sh@156 -- # true 00:20:41.624 11:11:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:41.624 11:11:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:41.624 Cannot find device "nvmf_tgt_br" 00:20:41.624 11:11:49 -- nvmf/common.sh@158 -- # true 00:20:41.624 11:11:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:41.624 Cannot find device "nvmf_tgt_br2" 00:20:41.624 11:11:49 -- nvmf/common.sh@159 -- # true 00:20:41.624 11:11:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:41.624 11:11:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:41.624 11:11:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:41.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:41.624 11:11:49 -- nvmf/common.sh@162 -- # true 00:20:41.624 11:11:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:41.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:41.624 11:11:49 -- nvmf/common.sh@163 -- # true 00:20:41.624 11:11:49 -- nvmf/common.sh@166 -- # ip 
netns add nvmf_tgt_ns_spdk 00:20:41.624 11:11:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:41.624 11:11:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:41.624 11:11:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:41.624 11:11:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:41.624 11:11:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:41.624 11:11:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:41.624 11:11:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:41.624 11:11:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:41.624 11:11:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:41.624 11:11:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:41.624 11:11:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:41.624 11:11:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:41.624 11:11:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:41.882 11:11:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:41.882 11:11:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:41.882 11:11:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:41.882 11:11:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:41.882 11:11:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:41.882 11:11:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:41.882 11:11:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:41.882 11:11:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:41.882 11:11:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:41.882 11:11:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:41.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:20:41.882 00:20:41.882 --- 10.0.0.2 ping statistics --- 00:20:41.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.882 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:41.882 11:11:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:41.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:41.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:41.882 00:20:41.882 --- 10.0.0.3 ping statistics --- 00:20:41.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.882 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:41.882 11:11:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:41.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:41.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:20:41.882 00:20:41.882 --- 10.0.0.1 ping statistics --- 00:20:41.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.882 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:41.882 11:11:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.882 11:11:49 -- nvmf/common.sh@422 -- # return 0 00:20:41.882 11:11:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:41.882 11:11:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.882 11:11:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:41.882 11:11:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:41.882 11:11:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.882 11:11:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:41.882 11:11:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:41.882 11:11:49 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:20:41.882 11:11:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:41.883 11:11:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:41.883 11:11:49 -- common/autotest_common.sh@10 -- # set +x 00:20:41.883 11:11:49 -- nvmf/common.sh@470 -- # nvmfpid=73261 00:20:41.883 11:11:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:41.883 11:11:49 -- nvmf/common.sh@471 -- # waitforlisten 73261 00:20:41.883 11:11:49 -- common/autotest_common.sh@817 -- # '[' -z 73261 ']' 00:20:41.883 11:11:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.883 11:11:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:41.883 11:11:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.883 11:11:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:41.883 11:11:49 -- common/autotest_common.sh@10 -- # set +x 00:20:41.883 [2024-04-18 11:11:50.090092] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:41.883 [2024-04-18 11:11:50.090285] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.141 [2024-04-18 11:11:50.272139] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:42.398 [2024-04-18 11:11:50.602221] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.398 [2024-04-18 11:11:50.602281] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.398 [2024-04-18 11:11:50.602303] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.398 [2024-04-18 11:11:50.602330] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.398 [2024-04-18 11:11:50.602346] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
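Note the core mask: the host-management target above was started with -m 0x1E and reported four reactors on cores 1-4, while this nvmf_lvol target uses -m 0x7, hence "Total cores available: 3" and reactors on cores 0-2 in the lines that follow. A quick, purely illustrative way to decode the masks:

    python3 -c 'for m in (0x1E, 0x7): print(hex(m), [c for c in range(8) if m >> c & 1])'
    # 0x1e [1, 2, 3, 4]   -> reactors for the host_management nvmf_tgt
    # 0x7  [0, 1, 2]      -> reactors for the nvmf_lvol nvmf_tgt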
00:20:42.398 [2024-04-18 11:11:50.602566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.398 [2024-04-18 11:11:50.603616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.398 [2024-04-18 11:11:50.603636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.963 11:11:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:42.963 11:11:51 -- common/autotest_common.sh@850 -- # return 0 00:20:42.963 11:11:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:42.963 11:11:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:42.963 11:11:51 -- common/autotest_common.sh@10 -- # set +x 00:20:42.963 11:11:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.963 11:11:51 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:43.221 [2024-04-18 11:11:51.319532] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.221 11:11:51 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:43.807 11:11:51 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:20:43.807 11:11:51 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:44.065 11:11:52 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:20:44.065 11:11:52 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:20:44.324 11:11:52 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:20:44.582 11:11:52 -- target/nvmf_lvol.sh@29 -- # lvs=157ed73f-6b1e-442f-8dd8-c68b209970ac 00:20:44.582 11:11:52 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 157ed73f-6b1e-442f-8dd8-c68b209970ac lvol 20 00:20:44.839 11:11:52 -- target/nvmf_lvol.sh@32 -- # lvol=cb5c5914-1d7d-4891-8655-0c1cee09a672 00:20:44.839 11:11:52 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:45.105 11:11:53 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb5c5914-1d7d-4891-8655-0c1cee09a672 00:20:45.362 11:11:53 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:45.362 [2024-04-18 11:11:53.567636] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.621 11:11:53 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:45.621 11:11:53 -- target/nvmf_lvol.sh@42 -- # perf_pid=73413 00:20:45.621 11:11:53 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:20:45.621 11:11:53 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:20:46.997 11:11:54 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot cb5c5914-1d7d-4891-8655-0c1cee09a672 MY_SNAPSHOT 00:20:47.322 11:11:55 -- target/nvmf_lvol.sh@47 -- # snapshot=97310477-bcd4-4d9e-8d77-15579847eecc 00:20:47.322 11:11:55 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize cb5c5914-1d7d-4891-8655-0c1cee09a672 30 00:20:47.582 11:11:55 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 97310477-bcd4-4d9e-8d77-15579847eecc MY_CLONE 00:20:47.842 11:11:55 -- target/nvmf_lvol.sh@49 -- # clone=2c66a97d-8c0d-4ecb-9d5a-9b408001858c 00:20:47.842 11:11:55 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 2c66a97d-8c0d-4ecb-9d5a-9b408001858c 00:20:48.775 11:11:56 -- target/nvmf_lvol.sh@53 -- # wait 73413 00:20:56.880 Initializing NVMe Controllers 00:20:56.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:20:56.880 Controller IO queue size 128, less than required. 00:20:56.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:56.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:20:56.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:20:56.880 Initialization complete. Launching workers. 00:20:56.880 ======================================================== 00:20:56.880 Latency(us) 00:20:56.880 Device Information : IOPS MiB/s Average min max 00:20:56.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7225.40 28.22 17731.80 360.32 235469.80 00:20:56.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 6939.30 27.11 18456.35 4913.44 182178.68 00:20:56.880 ======================================================== 00:20:56.880 Total : 14164.70 55.33 18086.76 360.32 235469.80 00:20:56.881 00:20:56.881 11:12:04 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:56.881 11:12:04 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cb5c5914-1d7d-4891-8655-0c1cee09a672 00:20:56.881 11:12:04 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 157ed73f-6b1e-442f-8dd8-c68b209970ac 00:20:56.881 11:12:04 -- target/nvmf_lvol.sh@60 -- # rm -f 00:20:56.881 11:12:04 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:20:56.881 11:12:04 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:20:56.881 11:12:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:56.881 11:12:04 -- nvmf/common.sh@117 -- # sync 00:20:56.881 11:12:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.881 11:12:05 -- nvmf/common.sh@120 -- # set +e 00:20:56.881 11:12:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.881 11:12:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.881 rmmod nvme_tcp 00:20:56.881 rmmod nvme_fabrics 00:20:56.881 rmmod nvme_keyring 00:20:56.881 11:12:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.881 11:12:05 -- nvmf/common.sh@124 -- # set -e 00:20:56.881 11:12:05 -- nvmf/common.sh@125 -- # return 0 00:20:56.881 11:12:05 -- nvmf/common.sh@478 -- # '[' -n 73261 ']' 00:20:56.881 11:12:05 -- nvmf/common.sh@479 -- # killprocess 73261 00:20:56.881 11:12:05 -- common/autotest_common.sh@936 -- # '[' -z 73261 ']' 00:20:56.881 11:12:05 -- common/autotest_common.sh@940 -- # kill -0 73261 00:20:56.881 11:12:05 -- common/autotest_common.sh@941 -- # uname 00:20:56.881 11:12:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:56.881 11:12:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
73261 00:20:56.881 killing process with pid 73261 00:20:56.881 11:12:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:56.881 11:12:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:56.881 11:12:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73261' 00:20:56.881 11:12:05 -- common/autotest_common.sh@955 -- # kill 73261 00:20:56.881 11:12:05 -- common/autotest_common.sh@960 -- # wait 73261 00:20:58.783 11:12:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:58.783 11:12:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:58.783 11:12:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:58.783 11:12:06 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:58.783 11:12:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:58.783 11:12:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.783 11:12:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.783 11:12:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.783 11:12:06 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:58.783 00:20:58.783 real 0m17.232s 00:20:58.783 user 1m8.943s 00:20:58.783 sys 0m3.681s 00:20:58.783 11:12:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:58.783 11:12:06 -- common/autotest_common.sh@10 -- # set +x 00:20:58.783 ************************************ 00:20:58.783 END TEST nvmf_lvol 00:20:58.783 ************************************ 00:20:58.783 11:12:06 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:58.783 11:12:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:58.783 11:12:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:58.783 11:12:06 -- common/autotest_common.sh@10 -- # set +x 00:20:58.783 ************************************ 00:20:58.783 START TEST nvmf_lvs_grow 00:20:58.783 ************************************ 00:20:58.783 11:12:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:58.783 * Looking for test storage... 
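For reference, the nvmf_lvol test that just ended amounts to the RPC sequence below, run while spdk_nvme_perf writes to the exported namespace. This is a sketch assembled from the traced commands; UUIDs are captured into shell variables here rather than repeating the literal values printed above, and rpc.py is the repo-local scripts/rpc.py:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                              # -> Malloc0
    $rpc bdev_malloc_create 64 512                              # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)              # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)             # 20 MiB logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs randwrite against the namespace for 10s:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    # teardown: delete subsystem, lvol, and lvstore, as in the trace above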
00:20:58.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:58.783 11:12:06 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:58.783 11:12:06 -- nvmf/common.sh@7 -- # uname -s 00:20:58.783 11:12:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.783 11:12:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.783 11:12:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.783 11:12:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.783 11:12:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.783 11:12:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.783 11:12:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.783 11:12:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.783 11:12:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.783 11:12:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.783 11:12:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:20:58.783 11:12:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:20:58.783 11:12:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.783 11:12:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.783 11:12:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:58.783 11:12:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.783 11:12:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58.783 11:12:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.783 11:12:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.783 11:12:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.783 11:12:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.783 11:12:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.783 11:12:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.783 11:12:06 -- paths/export.sh@5 -- # export PATH 00:20:58.783 11:12:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.783 11:12:06 -- nvmf/common.sh@47 -- # : 0 00:20:58.783 11:12:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:58.783 11:12:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:58.783 11:12:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.783 11:12:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.783 11:12:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.783 11:12:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:58.783 11:12:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:58.783 11:12:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:58.783 11:12:06 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:58.783 11:12:06 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:58.783 11:12:06 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:20:58.783 11:12:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:58.783 11:12:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.783 11:12:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:58.783 11:12:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:58.783 11:12:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:58.783 11:12:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.783 11:12:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.783 11:12:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.783 11:12:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:58.783 11:12:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:58.783 11:12:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:58.783 11:12:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:58.783 11:12:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:58.783 11:12:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:58.783 11:12:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.783 11:12:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.783 11:12:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:58.783 11:12:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:58.783 11:12:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:58.783 11:12:06 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:58.783 11:12:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:58.783 11:12:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.783 11:12:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:58.783 11:12:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:58.783 11:12:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:58.783 11:12:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:58.783 11:12:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:58.783 11:12:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:58.783 Cannot find device "nvmf_tgt_br" 00:20:58.783 11:12:06 -- nvmf/common.sh@155 -- # true 00:20:58.783 11:12:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:58.783 Cannot find device "nvmf_tgt_br2" 00:20:58.783 11:12:06 -- nvmf/common.sh@156 -- # true 00:20:58.783 11:12:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:58.783 11:12:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:58.783 Cannot find device "nvmf_tgt_br" 00:20:58.783 11:12:06 -- nvmf/common.sh@158 -- # true 00:20:58.783 11:12:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:58.783 Cannot find device "nvmf_tgt_br2" 00:20:58.783 11:12:06 -- nvmf/common.sh@159 -- # true 00:20:58.783 11:12:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:59.041 11:12:07 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:59.041 11:12:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:59.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:59.041 11:12:07 -- nvmf/common.sh@162 -- # true 00:20:59.041 11:12:07 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:59.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:59.041 11:12:07 -- nvmf/common.sh@163 -- # true 00:20:59.041 11:12:07 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:59.041 11:12:07 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:59.041 11:12:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:59.041 11:12:07 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:59.041 11:12:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:59.041 11:12:07 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:59.041 11:12:07 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:59.041 11:12:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:59.041 11:12:07 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:59.041 11:12:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:59.041 11:12:07 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:59.041 11:12:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:59.041 11:12:07 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:59.041 11:12:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:59.041 11:12:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
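The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init tears down any leftover interfaces unconditionally before rebuilding the topology for the next test. Standalone, that cleanup is roughly the following (the error suppression is added only for this sketch; the harness simply lets the deletions fail and moves on):

    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br nvmf_init_if; do
        ip link delete "$dev" 2>/dev/null || true
    done
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    # ...after which the namespace, veth pairs, bridge, and addresses are re-created
    # exactly as in the nvmf_lvol run earlier in the log.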
00:20:59.041 11:12:07 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:59.041 11:12:07 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:59.041 11:12:07 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:59.041 11:12:07 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:59.041 11:12:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:59.041 11:12:07 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:59.041 11:12:07 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:59.041 11:12:07 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:59.299 11:12:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:59.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:20:59.299 00:20:59.299 --- 10.0.0.2 ping statistics --- 00:20:59.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.299 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:59.299 11:12:07 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:59.299 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:59.299 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:20:59.299 00:20:59.299 --- 10.0.0.3 ping statistics --- 00:20:59.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.299 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:59.299 11:12:07 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:59.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:59.299 00:20:59.299 --- 10.0.0.1 ping statistics --- 00:20:59.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.299 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:59.299 11:12:07 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.299 11:12:07 -- nvmf/common.sh@422 -- # return 0 00:20:59.299 11:12:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:59.299 11:12:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.299 11:12:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:59.299 11:12:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:59.299 11:12:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.299 11:12:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:59.299 11:12:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:59.299 11:12:07 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:20:59.299 11:12:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:59.299 11:12:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:59.299 11:12:07 -- common/autotest_common.sh@10 -- # set +x 00:20:59.299 11:12:07 -- nvmf/common.sh@470 -- # nvmfpid=73793 00:20:59.299 11:12:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:59.299 11:12:07 -- nvmf/common.sh@471 -- # waitforlisten 73793 00:20:59.299 11:12:07 -- common/autotest_common.sh@817 -- # '[' -z 73793 ']' 00:20:59.299 11:12:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.299 11:12:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:59.299 11:12:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:59.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.299 11:12:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:59.299 11:12:07 -- common/autotest_common.sh@10 -- # set +x 00:20:59.299 [2024-04-18 11:12:07.412504] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:59.299 [2024-04-18 11:12:07.412683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.556 [2024-04-18 11:12:07.595370] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.814 [2024-04-18 11:12:07.935040] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.814 [2024-04-18 11:12:07.935116] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.814 [2024-04-18 11:12:07.935139] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.814 [2024-04-18 11:12:07.935166] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.814 [2024-04-18 11:12:07.935183] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.814 [2024-04-18 11:12:07.935228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.380 11:12:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:00.380 11:12:08 -- common/autotest_common.sh@850 -- # return 0 00:21:00.380 11:12:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:00.380 11:12:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:00.380 11:12:08 -- common/autotest_common.sh@10 -- # set +x 00:21:00.380 11:12:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.380 11:12:08 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:00.654 [2024-04-18 11:12:08.723327] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.654 11:12:08 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:21:00.654 11:12:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:00.654 11:12:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:00.654 11:12:08 -- common/autotest_common.sh@10 -- # set +x 00:21:00.654 ************************************ 00:21:00.654 START TEST lvs_grow_clean 00:21:00.654 ************************************ 00:21:00.654 11:12:08 -- common/autotest_common.sh@1111 -- # lvs_grow 00:21:00.654 11:12:08 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:00.654 11:12:08 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:00.654 11:12:08 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:00.654 11:12:08 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:00.654 11:12:08 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:00.654 11:12:08 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:00.654 11:12:08 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:00.654 11:12:08 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:00.654 11:12:08 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:01.218 11:12:09 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:01.218 11:12:09 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:01.218 11:12:09 -- target/nvmf_lvs_grow.sh@28 -- # lvs=dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:01.218 11:12:09 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:01.218 11:12:09 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:01.780 11:12:09 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:01.780 11:12:09 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:01.780 11:12:09 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 lvol 150 00:21:01.780 11:12:09 -- target/nvmf_lvs_grow.sh@33 -- # lvol=3a5e807c-3225-4e15-b578-c60aeea7f49e 00:21:01.780 11:12:09 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:01.780 11:12:09 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:02.036 [2024-04-18 11:12:10.210612] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:02.036 [2024-04-18 11:12:10.210751] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:02.036 true 00:21:02.036 11:12:10 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:02.036 11:12:10 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:02.293 11:12:10 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:02.293 11:12:10 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:02.564 11:12:10 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3a5e807c-3225-4e15-b578-c60aeea7f49e 00:21:02.822 11:12:11 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:03.080 [2024-04-18 11:12:11.215452] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.080 11:12:11 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:03.336 11:12:11 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73961 00:21:03.336 11:12:11 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:03.337 11:12:11 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:03.337 11:12:11 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73961 
/var/tmp/bdevperf.sock 00:21:03.337 11:12:11 -- common/autotest_common.sh@817 -- # '[' -z 73961 ']' 00:21:03.337 11:12:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.337 11:12:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:03.337 11:12:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.337 11:12:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:03.337 11:12:11 -- common/autotest_common.sh@10 -- # set +x 00:21:03.337 [2024-04-18 11:12:11.551134] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:03.337 [2024-04-18 11:12:11.551288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73961 ] 00:21:03.594 [2024-04-18 11:12:11.714693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.852 [2024-04-18 11:12:12.026175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.416 11:12:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:04.416 11:12:12 -- common/autotest_common.sh@850 -- # return 0 00:21:04.416 11:12:12 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:04.672 Nvme0n1 00:21:04.672 11:12:12 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:04.930 [ 00:21:04.930 { 00:21:04.930 "aliases": [ 00:21:04.930 "3a5e807c-3225-4e15-b578-c60aeea7f49e" 00:21:04.930 ], 00:21:04.930 "assigned_rate_limits": { 00:21:04.930 "r_mbytes_per_sec": 0, 00:21:04.930 "rw_ios_per_sec": 0, 00:21:04.930 "rw_mbytes_per_sec": 0, 00:21:04.930 "w_mbytes_per_sec": 0 00:21:04.930 }, 00:21:04.930 "block_size": 4096, 00:21:04.930 "claimed": false, 00:21:04.930 "driver_specific": { 00:21:04.930 "mp_policy": "active_passive", 00:21:04.930 "nvme": [ 00:21:04.930 { 00:21:04.930 "ctrlr_data": { 00:21:04.930 "ana_reporting": false, 00:21:04.930 "cntlid": 1, 00:21:04.930 "firmware_revision": "24.05", 00:21:04.930 "model_number": "SPDK bdev Controller", 00:21:04.930 "multi_ctrlr": true, 00:21:04.930 "oacs": { 00:21:04.930 "firmware": 0, 00:21:04.930 "format": 0, 00:21:04.930 "ns_manage": 0, 00:21:04.930 "security": 0 00:21:04.930 }, 00:21:04.930 "serial_number": "SPDK0", 00:21:04.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:04.930 "vendor_id": "0x8086" 00:21:04.930 }, 00:21:04.930 "ns_data": { 00:21:04.930 "can_share": true, 00:21:04.930 "id": 1 00:21:04.930 }, 00:21:04.930 "trid": { 00:21:04.930 "adrfam": "IPv4", 00:21:04.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:04.930 "traddr": "10.0.0.2", 00:21:04.930 "trsvcid": "4420", 00:21:04.930 "trtype": "TCP" 00:21:04.930 }, 00:21:04.930 "vs": { 00:21:04.930 "nvme_version": "1.3" 00:21:04.930 } 00:21:04.930 } 00:21:04.930 ] 00:21:04.930 }, 00:21:04.930 "memory_domains": [ 00:21:04.930 { 00:21:04.930 "dma_device_id": "system", 00:21:04.930 "dma_device_type": 1 00:21:04.930 } 00:21:04.930 ], 00:21:04.930 "name": "Nvme0n1", 00:21:04.930 "num_blocks": 38912, 00:21:04.930 "product_name": "NVMe 
disk", 00:21:04.930 "supported_io_types": { 00:21:04.930 "abort": true, 00:21:04.930 "compare": true, 00:21:04.930 "compare_and_write": true, 00:21:04.930 "flush": true, 00:21:04.930 "nvme_admin": true, 00:21:04.930 "nvme_io": true, 00:21:04.930 "read": true, 00:21:04.930 "reset": true, 00:21:04.930 "unmap": true, 00:21:04.930 "write": true, 00:21:04.930 "write_zeroes": true 00:21:04.930 }, 00:21:04.930 "uuid": "3a5e807c-3225-4e15-b578-c60aeea7f49e", 00:21:04.930 "zoned": false 00:21:04.930 } 00:21:04.930 ] 00:21:04.930 11:12:13 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74013 00:21:04.930 11:12:13 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:04.930 11:12:13 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:05.187 Running I/O for 10 seconds... 00:21:06.121 Latency(us) 00:21:06.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:06.121 Nvme0n1 : 1.00 6361.00 24.85 0.00 0.00 0.00 0.00 0.00 00:21:06.121 =================================================================================================================== 00:21:06.121 Total : 6361.00 24.85 0.00 0.00 0.00 0.00 0.00 00:21:06.121 00:21:07.088 11:12:15 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:07.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:07.088 Nvme0n1 : 2.00 6403.00 25.01 0.00 0.00 0.00 0.00 0.00 00:21:07.088 =================================================================================================================== 00:21:07.088 Total : 6403.00 25.01 0.00 0.00 0.00 0.00 0.00 00:21:07.088 00:21:07.352 true 00:21:07.352 11:12:15 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:07.352 11:12:15 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:07.611 11:12:15 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:07.611 11:12:15 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:07.611 11:12:15 -- target/nvmf_lvs_grow.sh@65 -- # wait 74013 00:21:08.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:08.231 Nvme0n1 : 3.00 6407.33 25.03 0.00 0.00 0.00 0.00 0.00 00:21:08.231 =================================================================================================================== 00:21:08.231 Total : 6407.33 25.03 0.00 0.00 0.00 0.00 0.00 00:21:08.231 00:21:09.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:09.165 Nvme0n1 : 4.00 6443.50 25.17 0.00 0.00 0.00 0.00 0.00 00:21:09.166 =================================================================================================================== 00:21:09.166 Total : 6443.50 25.17 0.00 0.00 0.00 0.00 0.00 00:21:09.166 00:21:10.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:10.100 Nvme0n1 : 5.00 6466.40 25.26 0.00 0.00 0.00 0.00 0.00 00:21:10.100 =================================================================================================================== 00:21:10.100 Total : 6466.40 25.26 0.00 0.00 0.00 0.00 0.00 00:21:10.100 00:21:11.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:11.035 Nvme0n1 : 6.00 6462.17 25.24 
0.00 0.00 0.00 0.00 0.00 00:21:11.035 =================================================================================================================== 00:21:11.035 Total : 6462.17 25.24 0.00 0.00 0.00 0.00 0.00 00:21:11.035 00:21:11.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:11.968 Nvme0n1 : 7.00 6427.86 25.11 0.00 0.00 0.00 0.00 0.00 00:21:11.968 =================================================================================================================== 00:21:11.968 Total : 6427.86 25.11 0.00 0.00 0.00 0.00 0.00 00:21:11.968 00:21:13.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:13.341 Nvme0n1 : 8.00 6415.62 25.06 0.00 0.00 0.00 0.00 0.00 00:21:13.341 =================================================================================================================== 00:21:13.341 Total : 6415.62 25.06 0.00 0.00 0.00 0.00 0.00 00:21:13.341 00:21:14.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:14.276 Nvme0n1 : 9.00 6388.56 24.96 0.00 0.00 0.00 0.00 0.00 00:21:14.276 =================================================================================================================== 00:21:14.276 Total : 6388.56 24.96 0.00 0.00 0.00 0.00 0.00 00:21:14.276 00:21:15.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:15.213 Nvme0n1 : 10.00 6376.40 24.91 0.00 0.00 0.00 0.00 0.00 00:21:15.213 =================================================================================================================== 00:21:15.213 Total : 6376.40 24.91 0.00 0.00 0.00 0.00 0.00 00:21:15.213 00:21:15.213 00:21:15.213 Latency(us) 00:21:15.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:15.213 Nvme0n1 : 10.02 6379.12 24.92 0.00 0.00 20058.31 8757.99 43849.54 00:21:15.213 =================================================================================================================== 00:21:15.213 Total : 6379.12 24.92 0.00 0.00 20058.31 8757.99 43849.54 00:21:15.213 0 00:21:15.213 11:12:23 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73961 00:21:15.213 11:12:23 -- common/autotest_common.sh@936 -- # '[' -z 73961 ']' 00:21:15.213 11:12:23 -- common/autotest_common.sh@940 -- # kill -0 73961 00:21:15.213 11:12:23 -- common/autotest_common.sh@941 -- # uname 00:21:15.213 11:12:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:15.213 11:12:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73961 00:21:15.213 11:12:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:15.213 11:12:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:15.213 11:12:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73961' 00:21:15.213 killing process with pid 73961 00:21:15.213 Received shutdown signal, test time was about 10.000000 seconds 00:21:15.213 00:21:15.213 Latency(us) 00:21:15.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.213 =================================================================================================================== 00:21:15.213 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.213 11:12:23 -- common/autotest_common.sh@955 -- # kill 73961 00:21:15.213 11:12:23 -- common/autotest_common.sh@960 -- # wait 73961 00:21:16.589 11:12:24 -- target/nvmf_lvs_grow.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:16.589 11:12:24 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:16.589 11:12:24 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:21:16.847 11:12:25 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:21:16.847 11:12:25 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:21:16.847 11:12:25 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:17.106 [2024-04-18 11:12:25.275293] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:17.106 11:12:25 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:17.106 11:12:25 -- common/autotest_common.sh@638 -- # local es=0 00:21:17.106 11:12:25 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:17.106 11:12:25 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:17.106 11:12:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:17.106 11:12:25 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:17.106 11:12:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:17.106 11:12:25 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:17.106 11:12:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:17.106 11:12:25 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:17.106 11:12:25 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:17.106 11:12:25 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:17.672 2024/04/18 11:12:25 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:dcb0a55f-3e79-47d9-9f57-bd8e7aefb055], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:21:17.672 request: 00:21:17.672 { 00:21:17.672 "method": "bdev_lvol_get_lvstores", 00:21:17.672 "params": { 00:21:17.672 "uuid": "dcb0a55f-3e79-47d9-9f57-bd8e7aefb055" 00:21:17.672 } 00:21:17.672 } 00:21:17.672 Got JSON-RPC error response 00:21:17.672 GoRPCClient: error on JSON-RPC call 00:21:17.672 11:12:25 -- common/autotest_common.sh@641 -- # es=1 00:21:17.672 11:12:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:17.672 11:12:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:17.672 11:12:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:17.672 11:12:25 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:17.672 aio_bdev 00:21:17.672 11:12:25 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 3a5e807c-3225-4e15-b578-c60aeea7f49e 00:21:17.672 11:12:25 -- common/autotest_common.sh@885 -- # local bdev_name=3a5e807c-3225-4e15-b578-c60aeea7f49e 00:21:17.672 11:12:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:17.672 11:12:25 -- common/autotest_common.sh@887 -- # 
local i 00:21:17.672 11:12:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:17.672 11:12:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:17.672 11:12:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:17.930 11:12:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a5e807c-3225-4e15-b578-c60aeea7f49e -t 2000 00:21:18.188 [ 00:21:18.188 { 00:21:18.188 "aliases": [ 00:21:18.188 "lvs/lvol" 00:21:18.188 ], 00:21:18.188 "assigned_rate_limits": { 00:21:18.188 "r_mbytes_per_sec": 0, 00:21:18.189 "rw_ios_per_sec": 0, 00:21:18.189 "rw_mbytes_per_sec": 0, 00:21:18.189 "w_mbytes_per_sec": 0 00:21:18.189 }, 00:21:18.189 "block_size": 4096, 00:21:18.189 "claimed": false, 00:21:18.189 "driver_specific": { 00:21:18.189 "lvol": { 00:21:18.189 "base_bdev": "aio_bdev", 00:21:18.189 "clone": false, 00:21:18.189 "esnap_clone": false, 00:21:18.189 "lvol_store_uuid": "dcb0a55f-3e79-47d9-9f57-bd8e7aefb055", 00:21:18.189 "snapshot": false, 00:21:18.189 "thin_provision": false 00:21:18.189 } 00:21:18.189 }, 00:21:18.189 "name": "3a5e807c-3225-4e15-b578-c60aeea7f49e", 00:21:18.189 "num_blocks": 38912, 00:21:18.189 "product_name": "Logical Volume", 00:21:18.189 "supported_io_types": { 00:21:18.189 "abort": false, 00:21:18.189 "compare": false, 00:21:18.189 "compare_and_write": false, 00:21:18.189 "flush": false, 00:21:18.189 "nvme_admin": false, 00:21:18.189 "nvme_io": false, 00:21:18.189 "read": true, 00:21:18.189 "reset": true, 00:21:18.189 "unmap": true, 00:21:18.189 "write": true, 00:21:18.189 "write_zeroes": true 00:21:18.189 }, 00:21:18.189 "uuid": "3a5e807c-3225-4e15-b578-c60aeea7f49e", 00:21:18.189 "zoned": false 00:21:18.189 } 00:21:18.189 ] 00:21:18.189 11:12:26 -- common/autotest_common.sh@893 -- # return 0 00:21:18.447 11:12:26 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:18.447 11:12:26 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:21:18.705 11:12:26 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:21:18.705 11:12:26 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:18.705 11:12:26 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:21:18.996 11:12:26 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:21:18.996 11:12:26 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3a5e807c-3225-4e15-b578-c60aeea7f49e 00:21:19.254 11:12:27 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dcb0a55f-3e79-47d9-9f57-bd8e7aefb055 00:21:19.513 11:12:27 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:19.771 11:12:27 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:20.029 ************************************ 00:21:20.029 END TEST lvs_grow_clean 00:21:20.029 ************************************ 00:21:20.029 00:21:20.029 real 0m19.290s 00:21:20.029 user 0m18.628s 00:21:20.029 sys 0m2.183s 00:21:20.029 11:12:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:20.029 11:12:28 -- common/autotest_common.sh@10 -- # set +x 00:21:20.029 11:12:28 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty 
lvs_grow dirty 00:21:20.029 11:12:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:20.029 11:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:20.029 11:12:28 -- common/autotest_common.sh@10 -- # set +x 00:21:20.029 ************************************ 00:21:20.029 START TEST lvs_grow_dirty 00:21:20.029 ************************************ 00:21:20.029 11:12:28 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:21:20.029 11:12:28 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:20.029 11:12:28 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:20.029 11:12:28 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:20.029 11:12:28 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:20.029 11:12:28 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:20.029 11:12:28 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:20.029 11:12:28 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:20.029 11:12:28 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:20.029 11:12:28 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:20.597 11:12:28 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:20.597 11:12:28 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:20.856 11:12:28 -- target/nvmf_lvs_grow.sh@28 -- # lvs=3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:20.856 11:12:28 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:20.856 11:12:28 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:21.115 11:12:29 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:21.115 11:12:29 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:21.115 11:12:29 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3bb8c6ef-d892-4471-9d75-40bec4496207 lvol 150 00:21:21.115 11:12:29 -- target/nvmf_lvs_grow.sh@33 -- # lvol=83921c9e-dd57-40b0-a1c0-d32257b74dcf 00:21:21.115 11:12:29 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:21.373 11:12:29 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:21.632 [2024-04-18 11:12:29.599486] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:21.632 [2024-04-18 11:12:29.599648] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:21.632 true 00:21:21.632 11:12:29 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:21.632 11:12:29 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:21.890 11:12:29 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:21.890 11:12:29 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 
-s SPDK0 00:21:22.148 11:12:30 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 83921c9e-dd57-40b0-a1c0-d32257b74dcf 00:21:22.406 11:12:30 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:22.666 11:12:30 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:22.925 11:12:30 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:22.925 11:12:30 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74413 00:21:22.925 11:12:30 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:22.925 11:12:30 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74413 /var/tmp/bdevperf.sock 00:21:22.925 11:12:30 -- common/autotest_common.sh@817 -- # '[' -z 74413 ']' 00:21:22.925 11:12:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.925 11:12:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:22.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.925 11:12:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.925 11:12:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:22.925 11:12:30 -- common/autotest_common.sh@10 -- # set +x 00:21:22.925 [2024-04-18 11:12:31.011198] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
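At this point the backing file has already been grown to 400M and rescanned, but the lvstore itself is grown only after bdevperf starts writing to the volume, which is what makes this the "dirty" variant. Condensed, the sequence being exercised is the sketch below (path shortened to a variable; UUIDs captured instead of the literal values printed above; the clean variant earlier ran the same grow with no I/O in flight):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    truncate -s 200M "$aio_file"                               # backing file starts at 200 MiB
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)     # 49 data clusters to start
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)           # 150 MiB volume
    truncate -s 400M "$aio_file"                               # grow the file under the bdev
    $rpc bdev_aio_rescan aio_bdev                              # block count 51200 -> 102400, as logged
    # lvol is exported over NVMe/TCP and bdevperf begins randwrite I/O; only then:
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99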
00:21:22.925 [2024-04-18 11:12:31.011406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74413 ] 00:21:23.184 [2024-04-18 11:12:31.178950] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.442 [2024-04-18 11:12:31.450530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.035 11:12:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:24.035 11:12:31 -- common/autotest_common.sh@850 -- # return 0 00:21:24.035 11:12:31 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:24.035 Nvme0n1 00:21:24.294 11:12:32 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:24.294 [ 00:21:24.294 { 00:21:24.294 "aliases": [ 00:21:24.294 "83921c9e-dd57-40b0-a1c0-d32257b74dcf" 00:21:24.294 ], 00:21:24.294 "assigned_rate_limits": { 00:21:24.294 "r_mbytes_per_sec": 0, 00:21:24.294 "rw_ios_per_sec": 0, 00:21:24.294 "rw_mbytes_per_sec": 0, 00:21:24.294 "w_mbytes_per_sec": 0 00:21:24.294 }, 00:21:24.294 "block_size": 4096, 00:21:24.294 "claimed": false, 00:21:24.294 "driver_specific": { 00:21:24.294 "mp_policy": "active_passive", 00:21:24.294 "nvme": [ 00:21:24.294 { 00:21:24.294 "ctrlr_data": { 00:21:24.294 "ana_reporting": false, 00:21:24.294 "cntlid": 1, 00:21:24.294 "firmware_revision": "24.05", 00:21:24.294 "model_number": "SPDK bdev Controller", 00:21:24.294 "multi_ctrlr": true, 00:21:24.294 "oacs": { 00:21:24.294 "firmware": 0, 00:21:24.294 "format": 0, 00:21:24.294 "ns_manage": 0, 00:21:24.294 "security": 0 00:21:24.294 }, 00:21:24.294 "serial_number": "SPDK0", 00:21:24.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:24.294 "vendor_id": "0x8086" 00:21:24.294 }, 00:21:24.294 "ns_data": { 00:21:24.294 "can_share": true, 00:21:24.294 "id": 1 00:21:24.294 }, 00:21:24.294 "trid": { 00:21:24.294 "adrfam": "IPv4", 00:21:24.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:24.294 "traddr": "10.0.0.2", 00:21:24.294 "trsvcid": "4420", 00:21:24.294 "trtype": "TCP" 00:21:24.295 }, 00:21:24.295 "vs": { 00:21:24.295 "nvme_version": "1.3" 00:21:24.295 } 00:21:24.295 } 00:21:24.295 ] 00:21:24.295 }, 00:21:24.295 "memory_domains": [ 00:21:24.295 { 00:21:24.295 "dma_device_id": "system", 00:21:24.295 "dma_device_type": 1 00:21:24.295 } 00:21:24.295 ], 00:21:24.295 "name": "Nvme0n1", 00:21:24.295 "num_blocks": 38912, 00:21:24.295 "product_name": "NVMe disk", 00:21:24.295 "supported_io_types": { 00:21:24.295 "abort": true, 00:21:24.295 "compare": true, 00:21:24.295 "compare_and_write": true, 00:21:24.295 "flush": true, 00:21:24.295 "nvme_admin": true, 00:21:24.295 "nvme_io": true, 00:21:24.295 "read": true, 00:21:24.295 "reset": true, 00:21:24.295 "unmap": true, 00:21:24.295 "write": true, 00:21:24.295 "write_zeroes": true 00:21:24.295 }, 00:21:24.295 "uuid": "83921c9e-dd57-40b0-a1c0-d32257b74dcf", 00:21:24.295 "zoned": false 00:21:24.295 } 00:21:24.295 ] 00:21:24.295 11:12:32 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74462 00:21:24.295 11:12:32 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.295 11:12:32 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:21:24.553 Running I/O for 10 seconds... 00:21:25.487 Latency(us) 00:21:25.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:25.487 Nvme0n1 : 1.00 5908.00 23.08 0.00 0.00 0.00 0.00 0.00 00:21:25.487 =================================================================================================================== 00:21:25.487 Total : 5908.00 23.08 0.00 0.00 0.00 0.00 0.00 00:21:25.487 00:21:26.422 11:12:34 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:26.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:26.422 Nvme0n1 : 2.00 6042.00 23.60 0.00 0.00 0.00 0.00 0.00 00:21:26.422 =================================================================================================================== 00:21:26.422 Total : 6042.00 23.60 0.00 0.00 0.00 0.00 0.00 00:21:26.422 00:21:26.680 true 00:21:26.680 11:12:34 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:26.680 11:12:34 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:26.939 11:12:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:26.939 11:12:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:26.939 11:12:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 74462 00:21:27.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:27.505 Nvme0n1 : 3.00 6192.00 24.19 0.00 0.00 0.00 0.00 0.00 00:21:27.505 =================================================================================================================== 00:21:27.505 Total : 6192.00 24.19 0.00 0.00 0.00 0.00 0.00 00:21:27.505 00:21:28.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:28.440 Nvme0n1 : 4.00 6183.75 24.16 0.00 0.00 0.00 0.00 0.00 00:21:28.440 =================================================================================================================== 00:21:28.440 Total : 6183.75 24.16 0.00 0.00 0.00 0.00 0.00 00:21:28.440 00:21:29.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:29.815 Nvme0n1 : 5.00 6207.80 24.25 0.00 0.00 0.00 0.00 0.00 00:21:29.815 =================================================================================================================== 00:21:29.815 Total : 6207.80 24.25 0.00 0.00 0.00 0.00 0.00 00:21:29.815 00:21:30.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:30.750 Nvme0n1 : 6.00 6166.00 24.09 0.00 0.00 0.00 0.00 0.00 00:21:30.750 =================================================================================================================== 00:21:30.750 Total : 6166.00 24.09 0.00 0.00 0.00 0.00 0.00 00:21:30.750 00:21:31.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:31.685 Nvme0n1 : 7.00 6187.57 24.17 0.00 0.00 0.00 0.00 0.00 00:21:31.685 =================================================================================================================== 00:21:31.685 Total : 6187.57 24.17 0.00 0.00 0.00 0.00 0.00 00:21:31.685 00:21:32.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:32.620 Nvme0n1 : 8.00 6197.38 24.21 0.00 0.00 0.00 0.00 0.00 00:21:32.620 
=================================================================================================================== 00:21:32.620 Total : 6197.38 24.21 0.00 0.00 0.00 0.00 0.00 00:21:32.620 00:21:33.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:33.553 Nvme0n1 : 9.00 6205.56 24.24 0.00 0.00 0.00 0.00 0.00 00:21:33.553 =================================================================================================================== 00:21:33.553 Total : 6205.56 24.24 0.00 0.00 0.00 0.00 0.00 00:21:33.553 00:21:34.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:34.488 Nvme0n1 : 10.00 6226.70 24.32 0.00 0.00 0.00 0.00 0.00 00:21:34.488 =================================================================================================================== 00:21:34.488 Total : 6226.70 24.32 0.00 0.00 0.00 0.00 0.00 00:21:34.488 00:21:34.488 00:21:34.488 Latency(us) 00:21:34.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:34.488 Nvme0n1 : 10.01 6234.93 24.36 0.00 0.00 20521.66 3872.58 68634.07 00:21:34.488 =================================================================================================================== 00:21:34.488 Total : 6234.93 24.36 0.00 0.00 20521.66 3872.58 68634.07 00:21:34.488 0 00:21:34.488 11:12:42 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74413 00:21:34.488 11:12:42 -- common/autotest_common.sh@936 -- # '[' -z 74413 ']' 00:21:34.488 11:12:42 -- common/autotest_common.sh@940 -- # kill -0 74413 00:21:34.488 11:12:42 -- common/autotest_common.sh@941 -- # uname 00:21:34.488 11:12:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:34.488 11:12:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74413 00:21:34.488 killing process with pid 74413 00:21:34.488 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.488 00:21:34.488 Latency(us) 00:21:34.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.488 =================================================================================================================== 00:21:34.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.488 11:12:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:34.488 11:12:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:34.488 11:12:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74413' 00:21:34.488 11:12:42 -- common/autotest_common.sh@955 -- # kill 74413 00:21:34.488 11:12:42 -- common/autotest_common.sh@960 -- # wait 74413 00:21:35.865 11:12:43 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:36.123 11:12:44 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:21:36.123 11:12:44 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:36.381 11:12:44 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:21:36.381 11:12:44 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:21:36.381 11:12:44 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73793 00:21:36.381 11:12:44 -- target/nvmf_lvs_grow.sh@74 -- # wait 73793 00:21:36.381 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73793 Killed "${NVMF_APP[@]}" "$@" 00:21:36.381 11:12:44 -- 
target/nvmf_lvs_grow.sh@74 -- # true 00:21:36.382 11:12:44 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:21:36.382 11:12:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:36.382 11:12:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:36.382 11:12:44 -- common/autotest_common.sh@10 -- # set +x 00:21:36.382 11:12:44 -- nvmf/common.sh@470 -- # nvmfpid=74624 00:21:36.382 11:12:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:36.382 11:12:44 -- nvmf/common.sh@471 -- # waitforlisten 74624 00:21:36.382 11:12:44 -- common/autotest_common.sh@817 -- # '[' -z 74624 ']' 00:21:36.382 11:12:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.382 11:12:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:36.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.382 11:12:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.382 11:12:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:36.382 11:12:44 -- common/autotest_common.sh@10 -- # set +x 00:21:36.639 [2024-04-18 11:12:44.625252] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:36.639 [2024-04-18 11:12:44.625426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.639 [2024-04-18 11:12:44.804025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.897 [2024-04-18 11:12:45.076641] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.897 [2024-04-18 11:12:45.076720] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.897 [2024-04-18 11:12:45.076741] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.897 [2024-04-18 11:12:45.076768] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.897 [2024-04-18 11:12:45.076784] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.897 [2024-04-18 11:12:45.076831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.463 11:12:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:37.463 11:12:45 -- common/autotest_common.sh@850 -- # return 0 00:21:37.463 11:12:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:37.463 11:12:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:37.463 11:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:37.463 11:12:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.463 11:12:45 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:37.720 [2024-04-18 11:12:45.865209] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:37.720 [2024-04-18 11:12:45.865709] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:37.720 [2024-04-18 11:12:45.865882] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:37.720 11:12:45 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:21:37.720 11:12:45 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 83921c9e-dd57-40b0-a1c0-d32257b74dcf 00:21:37.720 11:12:45 -- common/autotest_common.sh@885 -- # local bdev_name=83921c9e-dd57-40b0-a1c0-d32257b74dcf 00:21:37.720 11:12:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:37.720 11:12:45 -- common/autotest_common.sh@887 -- # local i 00:21:37.720 11:12:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:37.720 11:12:45 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:37.720 11:12:45 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:38.286 11:12:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 83921c9e-dd57-40b0-a1c0-d32257b74dcf -t 2000 00:21:38.286 [ 00:21:38.286 { 00:21:38.286 "aliases": [ 00:21:38.286 "lvs/lvol" 00:21:38.286 ], 00:21:38.286 "assigned_rate_limits": { 00:21:38.286 "r_mbytes_per_sec": 0, 00:21:38.286 "rw_ios_per_sec": 0, 00:21:38.286 "rw_mbytes_per_sec": 0, 00:21:38.286 "w_mbytes_per_sec": 0 00:21:38.286 }, 00:21:38.286 "block_size": 4096, 00:21:38.286 "claimed": false, 00:21:38.286 "driver_specific": { 00:21:38.286 "lvol": { 00:21:38.286 "base_bdev": "aio_bdev", 00:21:38.286 "clone": false, 00:21:38.286 "esnap_clone": false, 00:21:38.286 "lvol_store_uuid": "3bb8c6ef-d892-4471-9d75-40bec4496207", 00:21:38.286 "snapshot": false, 00:21:38.286 "thin_provision": false 00:21:38.286 } 00:21:38.286 }, 00:21:38.286 "name": "83921c9e-dd57-40b0-a1c0-d32257b74dcf", 00:21:38.286 "num_blocks": 38912, 00:21:38.286 "product_name": "Logical Volume", 00:21:38.286 "supported_io_types": { 00:21:38.286 "abort": false, 00:21:38.286 "compare": false, 00:21:38.286 "compare_and_write": false, 00:21:38.286 "flush": false, 00:21:38.286 "nvme_admin": false, 00:21:38.286 "nvme_io": false, 00:21:38.286 "read": true, 00:21:38.286 "reset": true, 00:21:38.286 "unmap": true, 00:21:38.286 "write": true, 00:21:38.286 "write_zeroes": true 00:21:38.286 }, 00:21:38.286 "uuid": "83921c9e-dd57-40b0-a1c0-d32257b74dcf", 00:21:38.286 "zoned": false 00:21:38.286 } 00:21:38.286 ] 00:21:38.286 11:12:46 -- common/autotest_common.sh@893 -- # return 0 00:21:38.286 11:12:46 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:38.286 11:12:46 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:21:38.544 11:12:46 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:21:38.544 11:12:46 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:38.544 11:12:46 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:21:38.802 11:12:46 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:21:38.802 11:12:46 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:39.393 [2024-04-18 11:12:47.286511] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:39.393 11:12:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:39.393 11:12:47 -- common/autotest_common.sh@638 -- # local es=0 00:21:39.393 11:12:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:39.393 11:12:47 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:39.393 11:12:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.393 11:12:47 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:39.393 11:12:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.393 11:12:47 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:39.393 11:12:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.393 11:12:47 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:39.393 11:12:47 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:39.393 11:12:47 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:39.394 2024/04/18 11:12:47 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:3bb8c6ef-d892-4471-9d75-40bec4496207], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:21:39.394 request: 00:21:39.394 { 00:21:39.394 "method": "bdev_lvol_get_lvstores", 00:21:39.394 "params": { 00:21:39.394 "uuid": "3bb8c6ef-d892-4471-9d75-40bec4496207" 00:21:39.394 } 00:21:39.394 } 00:21:39.394 Got JSON-RPC error response 00:21:39.394 GoRPCClient: error on JSON-RPC call 00:21:39.394 11:12:47 -- common/autotest_common.sh@641 -- # es=1 00:21:39.394 11:12:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:39.394 11:12:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:39.394 11:12:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:39.394 11:12:47 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:39.959 aio_bdev 00:21:39.959 11:12:47 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 83921c9e-dd57-40b0-a1c0-d32257b74dcf 00:21:39.959 11:12:47 -- common/autotest_common.sh@885 -- # local bdev_name=83921c9e-dd57-40b0-a1c0-d32257b74dcf 00:21:39.959 11:12:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:39.959 
11:12:47 -- common/autotest_common.sh@887 -- # local i 00:21:39.959 11:12:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:39.959 11:12:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:39.959 11:12:47 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:40.216 11:12:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 83921c9e-dd57-40b0-a1c0-d32257b74dcf -t 2000 00:21:40.474 [ 00:21:40.474 { 00:21:40.474 "aliases": [ 00:21:40.474 "lvs/lvol" 00:21:40.474 ], 00:21:40.474 "assigned_rate_limits": { 00:21:40.474 "r_mbytes_per_sec": 0, 00:21:40.474 "rw_ios_per_sec": 0, 00:21:40.474 "rw_mbytes_per_sec": 0, 00:21:40.474 "w_mbytes_per_sec": 0 00:21:40.474 }, 00:21:40.474 "block_size": 4096, 00:21:40.474 "claimed": false, 00:21:40.474 "driver_specific": { 00:21:40.474 "lvol": { 00:21:40.474 "base_bdev": "aio_bdev", 00:21:40.474 "clone": false, 00:21:40.474 "esnap_clone": false, 00:21:40.474 "lvol_store_uuid": "3bb8c6ef-d892-4471-9d75-40bec4496207", 00:21:40.474 "snapshot": false, 00:21:40.474 "thin_provision": false 00:21:40.474 } 00:21:40.474 }, 00:21:40.474 "name": "83921c9e-dd57-40b0-a1c0-d32257b74dcf", 00:21:40.474 "num_blocks": 38912, 00:21:40.474 "product_name": "Logical Volume", 00:21:40.474 "supported_io_types": { 00:21:40.474 "abort": false, 00:21:40.474 "compare": false, 00:21:40.474 "compare_and_write": false, 00:21:40.474 "flush": false, 00:21:40.474 "nvme_admin": false, 00:21:40.474 "nvme_io": false, 00:21:40.474 "read": true, 00:21:40.474 "reset": true, 00:21:40.474 "unmap": true, 00:21:40.474 "write": true, 00:21:40.474 "write_zeroes": true 00:21:40.474 }, 00:21:40.474 "uuid": "83921c9e-dd57-40b0-a1c0-d32257b74dcf", 00:21:40.474 "zoned": false 00:21:40.474 } 00:21:40.474 ] 00:21:40.474 11:12:48 -- common/autotest_common.sh@893 -- # return 0 00:21:40.474 11:12:48 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:40.474 11:12:48 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:21:40.731 11:12:48 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:21:40.731 11:12:48 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:21:40.731 11:12:48 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:40.988 11:12:48 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:21:40.988 11:12:48 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 83921c9e-dd57-40b0-a1c0-d32257b74dcf 00:21:41.247 11:12:49 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3bb8c6ef-d892-4471-9d75-40bec4496207 00:21:41.505 11:12:49 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:41.763 11:12:49 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:42.022 ************************************ 00:21:42.022 END TEST lvs_grow_dirty 00:21:42.022 ************************************ 00:21:42.022 00:21:42.022 real 0m21.923s 00:21:42.022 user 0m47.264s 00:21:42.022 sys 0m7.907s 00:21:42.022 11:12:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:42.022 11:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:42.022 11:12:50 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:21:42.022 11:12:50 -- common/autotest_common.sh@794 -- # type=--id 00:21:42.022 11:12:50 -- common/autotest_common.sh@795 -- # id=0 00:21:42.022 11:12:50 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:42.022 11:12:50 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:42.022 11:12:50 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:42.022 11:12:50 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:42.022 11:12:50 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:42.022 11:12:50 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:42.022 nvmf_trace.0 00:21:42.022 11:12:50 -- common/autotest_common.sh@809 -- # return 0 00:21:42.022 11:12:50 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:21:42.022 11:12:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:42.022 11:12:50 -- nvmf/common.sh@117 -- # sync 00:21:42.280 11:12:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.280 11:12:50 -- nvmf/common.sh@120 -- # set +e 00:21:42.280 11:12:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.280 11:12:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.280 rmmod nvme_tcp 00:21:42.280 rmmod nvme_fabrics 00:21:42.538 rmmod nvme_keyring 00:21:42.538 11:12:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.538 11:12:50 -- nvmf/common.sh@124 -- # set -e 00:21:42.538 11:12:50 -- nvmf/common.sh@125 -- # return 0 00:21:42.538 11:12:50 -- nvmf/common.sh@478 -- # '[' -n 74624 ']' 00:21:42.538 11:12:50 -- nvmf/common.sh@479 -- # killprocess 74624 00:21:42.538 11:12:50 -- common/autotest_common.sh@936 -- # '[' -z 74624 ']' 00:21:42.538 11:12:50 -- common/autotest_common.sh@940 -- # kill -0 74624 00:21:42.538 11:12:50 -- common/autotest_common.sh@941 -- # uname 00:21:42.538 11:12:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:42.538 11:12:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74624 00:21:42.538 11:12:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:42.538 11:12:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:42.538 killing process with pid 74624 00:21:42.538 11:12:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74624' 00:21:42.538 11:12:50 -- common/autotest_common.sh@955 -- # kill 74624 00:21:42.538 11:12:50 -- common/autotest_common.sh@960 -- # wait 74624 00:21:43.951 11:12:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:43.951 11:12:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:43.951 11:12:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:43.951 11:12:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.951 11:12:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.951 11:12:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.951 11:12:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.951 11:12:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.951 11:12:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:43.951 00:21:43.951 real 0m45.048s 00:21:43.951 user 1m13.731s 00:21:43.951 sys 0m11.056s 00:21:43.951 11:12:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:43.951 11:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:43.951 
************************************ 00:21:43.951 END TEST nvmf_lvs_grow 00:21:43.951 ************************************ 00:21:43.951 11:12:51 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:43.951 11:12:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:43.951 11:12:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.951 11:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:43.951 ************************************ 00:21:43.951 START TEST nvmf_bdev_io_wait 00:21:43.951 ************************************ 00:21:43.951 11:12:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:43.951 * Looking for test storage... 00:21:43.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:43.951 11:12:52 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:43.951 11:12:52 -- nvmf/common.sh@7 -- # uname -s 00:21:43.951 11:12:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.951 11:12:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.951 11:12:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.951 11:12:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.951 11:12:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.951 11:12:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.951 11:12:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.951 11:12:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.951 11:12:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.951 11:12:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.951 11:12:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:21:43.951 11:12:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:21:43.951 11:12:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.951 11:12:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.951 11:12:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:43.951 11:12:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.951 11:12:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:43.951 11:12:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.951 11:12:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.951 11:12:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.951 11:12:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.951 11:12:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.952 11:12:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.952 11:12:52 -- paths/export.sh@5 -- # export PATH 00:21:43.952 11:12:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.952 11:12:52 -- nvmf/common.sh@47 -- # : 0 00:21:43.952 11:12:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.952 11:12:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.952 11:12:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.952 11:12:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.952 11:12:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.952 11:12:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.952 11:12:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.952 11:12:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.952 11:12:52 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:43.952 11:12:52 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:43.952 11:12:52 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:21:43.952 11:12:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:43.952 11:12:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.952 11:12:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:43.952 11:12:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:43.952 11:12:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:43.952 11:12:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.952 11:12:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.952 11:12:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.952 11:12:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:43.952 11:12:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:43.952 11:12:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:43.952 11:12:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:43.952 11:12:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
00:21:43.952 11:12:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:43.952 11:12:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.952 11:12:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.952 11:12:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:43.952 11:12:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:43.952 11:12:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:43.952 11:12:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:43.952 11:12:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:43.952 11:12:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.952 11:12:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:43.952 11:12:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:43.952 11:12:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:43.952 11:12:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:43.952 11:12:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:43.952 11:12:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:43.952 Cannot find device "nvmf_tgt_br" 00:21:43.952 11:12:52 -- nvmf/common.sh@155 -- # true 00:21:43.952 11:12:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.952 Cannot find device "nvmf_tgt_br2" 00:21:43.952 11:12:52 -- nvmf/common.sh@156 -- # true 00:21:43.952 11:12:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:43.952 11:12:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:44.210 Cannot find device "nvmf_tgt_br" 00:21:44.210 11:12:52 -- nvmf/common.sh@158 -- # true 00:21:44.210 11:12:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:44.210 Cannot find device "nvmf_tgt_br2" 00:21:44.210 11:12:52 -- nvmf/common.sh@159 -- # true 00:21:44.210 11:12:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:44.210 11:12:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:44.210 11:12:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.210 11:12:52 -- nvmf/common.sh@162 -- # true 00:21:44.210 11:12:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.210 11:12:52 -- nvmf/common.sh@163 -- # true 00:21:44.210 11:12:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:44.210 11:12:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:44.210 11:12:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:44.210 11:12:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:44.210 11:12:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:44.210 11:12:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:44.210 11:12:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:44.210 11:12:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:44.210 11:12:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:44.210 
11:12:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:44.210 11:12:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:44.210 11:12:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:44.210 11:12:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:44.210 11:12:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:44.210 11:12:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:44.210 11:12:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:44.210 11:12:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:44.210 11:12:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:44.210 11:12:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:44.210 11:12:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:44.210 11:12:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:44.210 11:12:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:44.210 11:12:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:44.210 11:12:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:44.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:21:44.210 00:21:44.210 --- 10.0.0.2 ping statistics --- 00:21:44.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.210 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:44.210 11:12:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:44.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:44.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:21:44.210 00:21:44.210 --- 10.0.0.3 ping statistics --- 00:21:44.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.210 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:44.210 11:12:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:44.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:44.469 00:21:44.469 --- 10.0.0.1 ping statistics --- 00:21:44.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.469 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:44.469 11:12:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.469 11:12:52 -- nvmf/common.sh@422 -- # return 0 00:21:44.469 11:12:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:44.469 11:12:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.469 11:12:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:44.469 11:12:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:44.469 11:12:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.469 11:12:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:44.469 11:12:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:44.469 11:12:52 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:44.469 11:12:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:44.469 11:12:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:44.469 11:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:44.469 11:12:52 -- nvmf/common.sh@470 -- # nvmfpid=75060 00:21:44.469 11:12:52 -- nvmf/common.sh@471 -- # waitforlisten 75060 00:21:44.469 11:12:52 -- common/autotest_common.sh@817 -- # '[' -z 75060 ']' 00:21:44.469 11:12:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.469 11:12:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:44.469 11:12:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.469 11:12:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:44.469 11:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:44.469 11:12:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:44.469 [2024-04-18 11:12:52.576148] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:44.469 [2024-04-18 11:12:52.576336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.727 [2024-04-18 11:12:52.758251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.986 [2024-04-18 11:12:53.081142] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.986 [2024-04-18 11:12:53.081253] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.986 [2024-04-18 11:12:53.081307] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.986 [2024-04-18 11:12:53.081320] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.986 [2024-04-18 11:12:53.081334] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:44.986 [2024-04-18 11:12:53.082166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.986 [2024-04-18 11:12:53.082277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.986 [2024-04-18 11:12:53.082417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.986 [2024-04-18 11:12:53.082436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.551 11:12:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:45.551 11:12:53 -- common/autotest_common.sh@850 -- # return 0 00:21:45.551 11:12:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:45.551 11:12:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:45.551 11:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:45.551 11:12:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.551 11:12:53 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:21:45.551 11:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.551 11:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:45.551 11:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.551 11:12:53 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:21:45.551 11:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.551 11:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:45.810 11:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.810 11:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.810 11:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:45.810 [2024-04-18 11:12:53.846048] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.810 11:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:45.810 11:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.810 11:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:45.810 Malloc0 00:21:45.810 11:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:45.810 11:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.810 11:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:45.810 11:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:45.810 11:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.810 11:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:45.810 11:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.810 11:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.810 11:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:45.810 [2024-04-18 11:12:53.966641] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.810 11:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=75120 00:21:45.810 11:12:53 
-- target/bdev_io_wait.sh@30 -- # READ_PID=75122 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:21:45.810 11:12:53 -- nvmf/common.sh@521 -- # config=() 00:21:45.810 11:12:53 -- nvmf/common.sh@521 -- # local subsystem config 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=75123 00:21:45.810 11:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:45.810 11:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:45.810 { 00:21:45.810 "params": { 00:21:45.810 "name": "Nvme$subsystem", 00:21:45.810 "trtype": "$TEST_TRANSPORT", 00:21:45.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.810 "adrfam": "ipv4", 00:21:45.810 "trsvcid": "$NVMF_PORT", 00:21:45.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.810 "hdgst": ${hdgst:-false}, 00:21:45.810 "ddgst": ${ddgst:-false} 00:21:45.810 }, 00:21:45.810 "method": "bdev_nvme_attach_controller" 00:21:45.810 } 00:21:45.810 EOF 00:21:45.810 )") 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=75126 00:21:45.810 11:12:53 -- nvmf/common.sh@521 -- # config=() 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@35 -- # sync 00:21:45.810 11:12:53 -- nvmf/common.sh@521 -- # local subsystem config 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:21:45.810 11:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:45.810 11:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:45.810 { 00:21:45.810 "params": { 00:21:45.810 "name": "Nvme$subsystem", 00:21:45.810 "trtype": "$TEST_TRANSPORT", 00:21:45.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.810 "adrfam": "ipv4", 00:21:45.810 "trsvcid": "$NVMF_PORT", 00:21:45.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.810 "hdgst": ${hdgst:-false}, 00:21:45.810 "ddgst": ${ddgst:-false} 00:21:45.810 }, 00:21:45.810 "method": "bdev_nvme_attach_controller" 00:21:45.810 } 00:21:45.810 EOF 00:21:45.810 )") 00:21:45.810 11:12:53 -- nvmf/common.sh@543 -- # cat 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:21:45.810 11:12:53 -- nvmf/common.sh@543 -- # cat 00:21:45.810 11:12:53 -- nvmf/common.sh@521 -- # config=() 00:21:45.810 11:12:53 -- nvmf/common.sh@521 -- # local subsystem config 00:21:45.810 11:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:45.810 11:12:53 -- nvmf/common.sh@545 -- # jq . 
00:21:45.810 11:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:45.810 { 00:21:45.810 "params": { 00:21:45.810 "name": "Nvme$subsystem", 00:21:45.810 "trtype": "$TEST_TRANSPORT", 00:21:45.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.810 "adrfam": "ipv4", 00:21:45.810 "trsvcid": "$NVMF_PORT", 00:21:45.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.810 "hdgst": ${hdgst:-false}, 00:21:45.810 "ddgst": ${ddgst:-false} 00:21:45.810 }, 00:21:45.810 "method": "bdev_nvme_attach_controller" 00:21:45.810 } 00:21:45.810 EOF 00:21:45.810 )") 00:21:45.810 11:12:53 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:21:45.810 11:12:53 -- nvmf/common.sh@521 -- # config=() 00:21:45.810 11:12:53 -- nvmf/common.sh@521 -- # local subsystem config 00:21:45.810 11:12:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:45.810 11:12:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:45.810 { 00:21:45.810 "params": { 00:21:45.810 "name": "Nvme$subsystem", 00:21:45.810 "trtype": "$TEST_TRANSPORT", 00:21:45.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.810 "adrfam": "ipv4", 00:21:45.810 "trsvcid": "$NVMF_PORT", 00:21:45.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.810 "hdgst": ${hdgst:-false}, 00:21:45.810 "ddgst": ${ddgst:-false} 00:21:45.810 }, 00:21:45.810 "method": "bdev_nvme_attach_controller" 00:21:45.810 } 00:21:45.810 EOF 00:21:45.810 )") 00:21:45.810 11:12:53 -- nvmf/common.sh@546 -- # IFS=, 00:21:45.810 11:12:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:45.810 "params": { 00:21:45.810 "name": "Nvme1", 00:21:45.810 "trtype": "tcp", 00:21:45.810 "traddr": "10.0.0.2", 00:21:45.810 "adrfam": "ipv4", 00:21:45.810 "trsvcid": "4420", 00:21:45.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.810 "hdgst": false, 00:21:45.810 "ddgst": false 00:21:45.810 }, 00:21:45.811 "method": "bdev_nvme_attach_controller" 00:21:45.811 }' 00:21:45.811 11:12:53 -- nvmf/common.sh@543 -- # cat 00:21:45.811 11:12:53 -- nvmf/common.sh@545 -- # jq . 00:21:45.811 11:12:53 -- nvmf/common.sh@543 -- # cat 00:21:45.811 11:12:53 -- nvmf/common.sh@546 -- # IFS=, 00:21:45.811 11:12:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:45.811 "params": { 00:21:45.811 "name": "Nvme1", 00:21:45.811 "trtype": "tcp", 00:21:45.811 "traddr": "10.0.0.2", 00:21:45.811 "adrfam": "ipv4", 00:21:45.811 "trsvcid": "4420", 00:21:45.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.811 "hdgst": false, 00:21:45.811 "ddgst": false 00:21:45.811 }, 00:21:45.811 "method": "bdev_nvme_attach_controller" 00:21:45.811 }' 00:21:45.811 11:12:53 -- nvmf/common.sh@545 -- # jq . 00:21:45.811 11:12:53 -- nvmf/common.sh@546 -- # IFS=, 00:21:45.811 11:12:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:45.811 "params": { 00:21:45.811 "name": "Nvme1", 00:21:45.811 "trtype": "tcp", 00:21:45.811 "traddr": "10.0.0.2", 00:21:45.811 "adrfam": "ipv4", 00:21:45.811 "trsvcid": "4420", 00:21:45.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.811 "hdgst": false, 00:21:45.811 "ddgst": false 00:21:45.811 }, 00:21:45.811 "method": "bdev_nvme_attach_controller" 00:21:45.811 }' 00:21:45.811 11:12:53 -- nvmf/common.sh@545 -- # jq . 
00:21:45.811 11:12:54 -- nvmf/common.sh@546 -- # IFS=, 00:21:45.811 11:12:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:45.811 "params": { 00:21:45.811 "name": "Nvme1", 00:21:45.811 "trtype": "tcp", 00:21:45.811 "traddr": "10.0.0.2", 00:21:45.811 "adrfam": "ipv4", 00:21:45.811 "trsvcid": "4420", 00:21:45.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.811 "hdgst": false, 00:21:45.811 "ddgst": false 00:21:45.811 }, 00:21:45.811 "method": "bdev_nvme_attach_controller" 00:21:45.811 }' 00:21:46.069 11:12:54 -- target/bdev_io_wait.sh@37 -- # wait 75120 00:21:46.069 [2024-04-18 11:12:54.104182] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:46.069 [2024-04-18 11:12:54.104379] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:21:46.069 [2024-04-18 11:12:54.118332] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:46.069 [2024-04-18 11:12:54.119051] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:21:46.069 [2024-04-18 11:12:54.123247] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:46.069 [2024-04-18 11:12:54.123394] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:21:46.069 [2024-04-18 11:12:54.146241] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:46.069 [2024-04-18 11:12:54.146379] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:46.328 [2024-04-18 11:12:54.372800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.328 [2024-04-18 11:12:54.448706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.328 [2024-04-18 11:12:54.520309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.586 [2024-04-18 11:12:54.592172] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.586 [2024-04-18 11:12:54.638649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:46.586 [2024-04-18 11:12:54.662712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:46.586 [2024-04-18 11:12:54.768730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:46.845 [2024-04-18 11:12:54.813800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:46.845 Running I/O for 1 seconds... 00:21:46.845 Running I/O for 1 seconds... 00:21:47.104 Running I/O for 1 seconds... 00:21:47.104 Running I/O for 1 seconds... 
00:21:48.061 00:21:48.061 Latency(us) 00:21:48.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.061 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:21:48.061 Nvme1n1 : 1.03 4800.99 18.75 0.00 0.00 26486.52 7149.38 47424.23 00:21:48.061 =================================================================================================================== 00:21:48.061 Total : 4800.99 18.75 0.00 0.00 26486.52 7149.38 47424.23 00:21:48.061 00:21:48.061 Latency(us) 00:21:48.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.061 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:21:48.061 Nvme1n1 : 1.00 146921.48 573.91 0.00 0.00 867.84 346.30 1995.87 00:21:48.061 =================================================================================================================== 00:21:48.061 Total : 146921.48 573.91 0.00 0.00 867.84 346.30 1995.87 00:21:48.061 00:21:48.061 Latency(us) 00:21:48.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.061 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:21:48.061 Nvme1n1 : 1.01 6980.04 27.27 0.00 0.00 18232.44 4438.57 35746.91 00:21:48.061 =================================================================================================================== 00:21:48.061 Total : 6980.04 27.27 0.00 0.00 18232.44 4438.57 35746.91 00:21:48.061 00:21:48.061 Latency(us) 00:21:48.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.061 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:21:48.061 Nvme1n1 : 1.01 4890.62 19.10 0.00 0.00 26047.44 7864.32 56241.80 00:21:48.061 =================================================================================================================== 00:21:48.061 Total : 4890.62 19.10 0.00 0.00 26047.44 7864.32 56241.80 00:21:49.453 11:12:57 -- target/bdev_io_wait.sh@38 -- # wait 75122 00:21:49.453 11:12:57 -- target/bdev_io_wait.sh@39 -- # wait 75123 00:21:49.453 11:12:57 -- target/bdev_io_wait.sh@40 -- # wait 75126 00:21:49.453 11:12:57 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:49.453 11:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.453 11:12:57 -- common/autotest_common.sh@10 -- # set +x 00:21:49.453 11:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.453 11:12:57 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:21:49.453 11:12:57 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:21:49.453 11:12:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:49.453 11:12:57 -- nvmf/common.sh@117 -- # sync 00:21:49.453 11:12:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:49.453 11:12:57 -- nvmf/common.sh@120 -- # set +e 00:21:49.453 11:12:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:49.453 11:12:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:49.453 rmmod nvme_tcp 00:21:49.453 rmmod nvme_fabrics 00:21:49.453 rmmod nvme_keyring 00:21:49.453 11:12:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:49.453 11:12:57 -- nvmf/common.sh@124 -- # set -e 00:21:49.453 11:12:57 -- nvmf/common.sh@125 -- # return 0 00:21:49.453 11:12:57 -- nvmf/common.sh@478 -- # '[' -n 75060 ']' 00:21:49.453 11:12:57 -- nvmf/common.sh@479 -- # killprocess 75060 00:21:49.453 11:12:57 -- common/autotest_common.sh@936 -- # '[' -z 75060 ']' 00:21:49.453 11:12:57 -- common/autotest_common.sh@940 -- 
# kill -0 75060 00:21:49.453 11:12:57 -- common/autotest_common.sh@941 -- # uname 00:21:49.453 11:12:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:49.453 11:12:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75060 00:21:49.453 killing process with pid 75060 00:21:49.453 11:12:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:49.453 11:12:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:49.453 11:12:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75060' 00:21:49.453 11:12:57 -- common/autotest_common.sh@955 -- # kill 75060 00:21:49.453 11:12:57 -- common/autotest_common.sh@960 -- # wait 75060 00:21:50.827 11:12:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:50.827 11:12:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:50.827 11:12:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:50.827 11:12:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.827 11:12:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.827 11:12:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.827 11:12:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.827 11:12:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.827 11:12:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:50.827 00:21:50.827 real 0m6.795s 00:21:50.827 user 0m30.717s 00:21:50.827 sys 0m2.738s 00:21:50.827 ************************************ 00:21:50.827 END TEST nvmf_bdev_io_wait 00:21:50.827 ************************************ 00:21:50.827 11:12:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:50.827 11:12:58 -- common/autotest_common.sh@10 -- # set +x 00:21:50.827 11:12:58 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:50.827 11:12:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:50.827 11:12:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:50.827 11:12:58 -- common/autotest_common.sh@10 -- # set +x 00:21:50.827 ************************************ 00:21:50.827 START TEST nvmf_queue_depth 00:21:50.827 ************************************ 00:21:50.827 11:12:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:50.827 * Looking for test storage... 
00:21:50.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:50.827 11:12:58 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:50.827 11:12:58 -- nvmf/common.sh@7 -- # uname -s 00:21:50.827 11:12:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.827 11:12:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.827 11:12:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.827 11:12:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.827 11:12:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.827 11:12:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.827 11:12:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.827 11:12:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.827 11:12:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.827 11:12:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.827 11:12:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:21:50.827 11:12:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:21:50.827 11:12:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.827 11:12:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.827 11:12:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:50.828 11:12:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.828 11:12:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:50.828 11:12:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.828 11:12:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.828 11:12:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.828 11:12:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.828 11:12:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.828 11:12:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.828 11:12:59 -- paths/export.sh@5 -- # export PATH 00:21:50.828 11:12:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.828 11:12:59 -- nvmf/common.sh@47 -- # : 0 00:21:50.828 11:12:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.828 11:12:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.828 11:12:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.828 11:12:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.828 11:12:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.828 11:12:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.828 11:12:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.828 11:12:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.828 11:12:59 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:21:50.828 11:12:59 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:21:50.828 11:12:59 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.828 11:12:59 -- target/queue_depth.sh@19 -- # nvmftestinit 00:21:50.828 11:12:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:50.828 11:12:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.828 11:12:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:50.828 11:12:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:50.828 11:12:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:50.828 11:12:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.828 11:12:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.828 11:12:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.828 11:12:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:50.828 11:12:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:50.828 11:12:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:50.828 11:12:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:50.828 11:12:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:50.828 11:12:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:50.828 11:12:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.828 11:12:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.828 11:12:59 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:50.828 11:12:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:50.828 11:12:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:50.828 11:12:59 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:50.828 11:12:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:50.828 11:12:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.828 11:12:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:50.828 11:12:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:50.828 11:12:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:50.828 11:12:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:50.828 11:12:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:50.828 11:12:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:50.828 Cannot find device "nvmf_tgt_br" 00:21:50.828 11:12:59 -- nvmf/common.sh@155 -- # true 00:21:50.828 11:12:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:50.828 Cannot find device "nvmf_tgt_br2" 00:21:50.828 11:12:59 -- nvmf/common.sh@156 -- # true 00:21:50.828 11:12:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:51.086 11:12:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:51.086 Cannot find device "nvmf_tgt_br" 00:21:51.086 11:12:59 -- nvmf/common.sh@158 -- # true 00:21:51.086 11:12:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:51.086 Cannot find device "nvmf_tgt_br2" 00:21:51.086 11:12:59 -- nvmf/common.sh@159 -- # true 00:21:51.086 11:12:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:51.086 11:12:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:51.086 11:12:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:51.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:51.086 11:12:59 -- nvmf/common.sh@162 -- # true 00:21:51.086 11:12:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:51.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:51.086 11:12:59 -- nvmf/common.sh@163 -- # true 00:21:51.086 11:12:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:51.086 11:12:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:51.086 11:12:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:51.086 11:12:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:51.086 11:12:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:51.086 11:12:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:51.086 11:12:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:51.086 11:12:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:51.086 11:12:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:51.086 11:12:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:51.086 11:12:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:51.086 11:12:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:51.086 11:12:59 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:51.086 11:12:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:51.087 11:12:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:21:51.087 11:12:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:51.087 11:12:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:51.087 11:12:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:51.087 11:12:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:51.087 11:12:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:51.087 11:12:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:51.087 11:12:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:51.087 11:12:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:51.345 11:12:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:51.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:21:51.345 00:21:51.345 --- 10.0.0.2 ping statistics --- 00:21:51.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.345 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:21:51.345 11:12:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:51.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:51.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:21:51.345 00:21:51.345 --- 10.0.0.3 ping statistics --- 00:21:51.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.345 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:51.345 11:12:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:51.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:51.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:51.345 00:21:51.345 --- 10.0.0.1 ping statistics --- 00:21:51.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.345 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:51.345 11:12:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.345 11:12:59 -- nvmf/common.sh@422 -- # return 0 00:21:51.345 11:12:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:51.345 11:12:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.345 11:12:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:51.345 11:12:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:51.345 11:12:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.345 11:12:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:51.345 11:12:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:51.345 11:12:59 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:21:51.345 11:12:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:51.345 11:12:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:51.345 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:21:51.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
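For reference, the veth/bridge topology that nvmf_veth_init builds in the entries above can be reproduced by hand with roughly the following commands (a condensed sketch of exactly what the log shows, using the script's default interface and namespace names):

# the target runs in its own network namespace, reached through veth pairs bridged on the host
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target path
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target path (used by the multipath test later)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # and across the bridge

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm both directions of that path before the target application is started.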
00:21:51.345 11:12:59 -- nvmf/common.sh@470 -- # nvmfpid=75397 00:21:51.345 11:12:59 -- nvmf/common.sh@471 -- # waitforlisten 75397 00:21:51.345 11:12:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:51.345 11:12:59 -- common/autotest_common.sh@817 -- # '[' -z 75397 ']' 00:21:51.345 11:12:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.345 11:12:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:51.345 11:12:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.345 11:12:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:51.345 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:21:51.345 [2024-04-18 11:12:59.445375] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:51.345 [2024-04-18 11:12:59.445536] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.603 [2024-04-18 11:12:59.630863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.861 [2024-04-18 11:12:59.890583] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.861 [2024-04-18 11:12:59.890671] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.861 [2024-04-18 11:12:59.890723] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.861 [2024-04-18 11:12:59.890778] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.861 [2024-04-18 11:12:59.890793] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:51.861 [2024-04-18 11:12:59.890833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.427 11:13:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:52.427 11:13:00 -- common/autotest_common.sh@850 -- # return 0 00:21:52.427 11:13:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:52.427 11:13:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:52.428 11:13:00 -- common/autotest_common.sh@10 -- # set +x 00:21:52.428 11:13:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.428 11:13:00 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.428 11:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.428 11:13:00 -- common/autotest_common.sh@10 -- # set +x 00:21:52.428 [2024-04-18 11:13:00.446179] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.428 11:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.428 11:13:00 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:52.428 11:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.428 11:13:00 -- common/autotest_common.sh@10 -- # set +x 00:21:52.428 Malloc0 00:21:52.428 11:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.428 11:13:00 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.428 11:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.428 11:13:00 -- common/autotest_common.sh@10 -- # set +x 00:21:52.428 11:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.428 11:13:00 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.428 11:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.428 11:13:00 -- common/autotest_common.sh@10 -- # set +x 00:21:52.428 11:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.428 11:13:00 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.428 11:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.428 11:13:00 -- common/autotest_common.sh@10 -- # set +x 00:21:52.428 [2024-04-18 11:13:00.563578] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
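Stripped of the harness wrappers, the rpc_cmd calls above amount to the following scripts/rpc.py invocations against the target's default RPC socket (/var/tmp/spdk.sock); a sketch equivalent to what this run configures:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                              # TCP transport with the options used in this run
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                                 # 64 MiB RAM-backed bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001            # -a: allow any host, -s: serial number
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                             # expose Malloc0 as the subsystem's namespace
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420    # listen on the namespaced veth address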
00:21:52.428 11:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.428 11:13:00 -- target/queue_depth.sh@30 -- # bdevperf_pid=75447 00:21:52.428 11:13:00 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:21:52.428 11:13:00 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:52.428 11:13:00 -- target/queue_depth.sh@33 -- # waitforlisten 75447 /var/tmp/bdevperf.sock 00:21:52.428 11:13:00 -- common/autotest_common.sh@817 -- # '[' -z 75447 ']' 00:21:52.428 11:13:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.428 11:13:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:52.428 11:13:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.428 11:13:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:52.428 11:13:00 -- common/autotest_common.sh@10 -- # set +x 00:21:52.686 [2024-04-18 11:13:00.679216] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:52.686 [2024-04-18 11:13:00.679666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75447 ] 00:21:52.687 [2024-04-18 11:13:00.857777] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.945 [2024-04-18 11:13:01.152989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.512 11:13:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:53.512 11:13:01 -- common/autotest_common.sh@850 -- # return 0 00:21:53.512 11:13:01 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.512 11:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.512 11:13:01 -- common/autotest_common.sh@10 -- # set +x 00:21:53.512 NVMe0n1 00:21:53.512 11:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.512 11:13:01 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:53.771 Running I/O for 10 seconds... 
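The initiator half of the queue-depth check is bdevperf started in wait-for-RPC mode (-z) with 1024 outstanding 4 KiB verify I/Os for 10 seconds; condensed, the three steps the harness drives above are:

build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &    # queue depth 1024, 4096-byte I/O, verify workload, 10 s
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1                      # attach to the subsystem exported above, producing bdev NVMe0n1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                # kick off the run and wait for the summary printed below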
00:22:03.760 00:22:03.760 Latency(us) 00:22:03.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.760 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:22:03.760 Verification LBA range: start 0x0 length 0x4000 00:22:03.760 NVMe0n1 : 10.09 6399.62 25.00 0.00 0.00 159134.82 21567.30 104857.60 00:22:03.760 =================================================================================================================== 00:22:03.760 Total : 6399.62 25.00 0.00 0.00 159134.82 21567.30 104857.60 00:22:03.760 0 00:22:03.760 11:13:11 -- target/queue_depth.sh@39 -- # killprocess 75447 00:22:03.760 11:13:11 -- common/autotest_common.sh@936 -- # '[' -z 75447 ']' 00:22:03.760 11:13:11 -- common/autotest_common.sh@940 -- # kill -0 75447 00:22:03.760 11:13:11 -- common/autotest_common.sh@941 -- # uname 00:22:03.760 11:13:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:03.760 11:13:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75447 00:22:03.760 11:13:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:03.760 killing process with pid 75447 00:22:03.760 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.760 00:22:03.760 Latency(us) 00:22:03.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.760 =================================================================================================================== 00:22:03.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.760 11:13:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:03.760 11:13:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75447' 00:22:03.760 11:13:11 -- common/autotest_common.sh@955 -- # kill 75447 00:22:03.760 11:13:11 -- common/autotest_common.sh@960 -- # wait 75447 00:22:05.132 11:13:13 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:05.133 11:13:13 -- target/queue_depth.sh@43 -- # nvmftestfini 00:22:05.133 11:13:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:05.133 11:13:13 -- nvmf/common.sh@117 -- # sync 00:22:05.133 11:13:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:05.133 11:13:13 -- nvmf/common.sh@120 -- # set +e 00:22:05.133 11:13:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:05.133 11:13:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:05.133 rmmod nvme_tcp 00:22:05.133 rmmod nvme_fabrics 00:22:05.133 rmmod nvme_keyring 00:22:05.133 11:13:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:05.133 11:13:13 -- nvmf/common.sh@124 -- # set -e 00:22:05.133 11:13:13 -- nvmf/common.sh@125 -- # return 0 00:22:05.133 11:13:13 -- nvmf/common.sh@478 -- # '[' -n 75397 ']' 00:22:05.133 11:13:13 -- nvmf/common.sh@479 -- # killprocess 75397 00:22:05.133 11:13:13 -- common/autotest_common.sh@936 -- # '[' -z 75397 ']' 00:22:05.133 11:13:13 -- common/autotest_common.sh@940 -- # kill -0 75397 00:22:05.133 11:13:13 -- common/autotest_common.sh@941 -- # uname 00:22:05.133 11:13:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:05.133 11:13:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75397 00:22:05.133 11:13:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:05.133 killing process with pid 75397 00:22:05.133 11:13:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:05.133 11:13:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75397' 00:22:05.133 11:13:13 -- 
common/autotest_common.sh@955 -- # kill 75397 00:22:05.133 11:13:13 -- common/autotest_common.sh@960 -- # wait 75397 00:22:07.033 11:13:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:07.033 11:13:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:07.033 11:13:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:07.033 11:13:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:07.033 11:13:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:07.033 11:13:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.033 11:13:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.033 11:13:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.033 11:13:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:07.033 ************************************ 00:22:07.033 END TEST nvmf_queue_depth 00:22:07.033 ************************************ 00:22:07.033 00:22:07.033 real 0m15.934s 00:22:07.033 user 0m26.749s 00:22:07.033 sys 0m2.247s 00:22:07.033 11:13:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:07.033 11:13:14 -- common/autotest_common.sh@10 -- # set +x 00:22:07.033 11:13:14 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:07.033 11:13:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:07.033 11:13:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:07.033 11:13:14 -- common/autotest_common.sh@10 -- # set +x 00:22:07.033 ************************************ 00:22:07.033 START TEST nvmf_multipath 00:22:07.033 ************************************ 00:22:07.033 11:13:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:07.033 * Looking for test storage... 
00:22:07.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:07.033 11:13:15 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:07.033 11:13:15 -- nvmf/common.sh@7 -- # uname -s 00:22:07.033 11:13:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.033 11:13:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.033 11:13:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.033 11:13:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.033 11:13:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.033 11:13:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.033 11:13:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.033 11:13:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.033 11:13:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.033 11:13:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.033 11:13:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:22:07.033 11:13:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:22:07.033 11:13:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.033 11:13:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.033 11:13:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:07.033 11:13:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.033 11:13:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:07.033 11:13:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.033 11:13:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.033 11:13:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.033 11:13:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.033 11:13:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.033 11:13:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.034 11:13:15 -- paths/export.sh@5 -- # export PATH 00:22:07.034 11:13:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.034 11:13:15 -- nvmf/common.sh@47 -- # : 0 00:22:07.034 11:13:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.034 11:13:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.034 11:13:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.034 11:13:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.034 11:13:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.034 11:13:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.034 11:13:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.034 11:13:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.034 11:13:15 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:07.034 11:13:15 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:07.034 11:13:15 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:07.034 11:13:15 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:07.034 11:13:15 -- target/multipath.sh@43 -- # nvmftestinit 00:22:07.034 11:13:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:07.034 11:13:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.034 11:13:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:07.034 11:13:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:07.034 11:13:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:07.034 11:13:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.034 11:13:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.034 11:13:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.034 11:13:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:07.034 11:13:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:07.034 11:13:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:07.034 11:13:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:07.034 11:13:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:07.034 11:13:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:07.034 11:13:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.034 11:13:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.034 11:13:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:07.034 11:13:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:07.034 11:13:15 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:07.034 11:13:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:07.034 11:13:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:07.034 11:13:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.034 11:13:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:07.034 11:13:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:07.034 11:13:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:07.034 11:13:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:07.034 11:13:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:07.034 11:13:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:07.034 Cannot find device "nvmf_tgt_br" 00:22:07.034 11:13:15 -- nvmf/common.sh@155 -- # true 00:22:07.034 11:13:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:07.034 Cannot find device "nvmf_tgt_br2" 00:22:07.034 11:13:15 -- nvmf/common.sh@156 -- # true 00:22:07.034 11:13:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:07.034 11:13:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:07.034 Cannot find device "nvmf_tgt_br" 00:22:07.034 11:13:15 -- nvmf/common.sh@158 -- # true 00:22:07.034 11:13:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:07.034 Cannot find device "nvmf_tgt_br2" 00:22:07.034 11:13:15 -- nvmf/common.sh@159 -- # true 00:22:07.034 11:13:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:07.034 11:13:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:07.034 11:13:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:07.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:07.034 11:13:15 -- nvmf/common.sh@162 -- # true 00:22:07.034 11:13:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:07.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:07.034 11:13:15 -- nvmf/common.sh@163 -- # true 00:22:07.034 11:13:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:07.034 11:13:15 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:07.034 11:13:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:07.034 11:13:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:07.034 11:13:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:07.034 11:13:15 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:07.291 11:13:15 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:07.291 11:13:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:07.291 11:13:15 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:07.291 11:13:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:07.291 11:13:15 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:07.291 11:13:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:07.291 11:13:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:07.291 11:13:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:22:07.291 11:13:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:07.291 11:13:15 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:07.291 11:13:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:07.291 11:13:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:07.291 11:13:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:07.291 11:13:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:07.291 11:13:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:07.291 11:13:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:07.291 11:13:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:07.291 11:13:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:07.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:22:07.291 00:22:07.291 --- 10.0.0.2 ping statistics --- 00:22:07.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.291 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:07.291 11:13:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:07.291 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:07.291 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:22:07.291 00:22:07.291 --- 10.0.0.3 ping statistics --- 00:22:07.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.291 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:22:07.291 11:13:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:07.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:07.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:22:07.291 00:22:07.291 --- 10.0.0.1 ping statistics --- 00:22:07.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.291 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:07.291 11:13:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.291 11:13:15 -- nvmf/common.sh@422 -- # return 0 00:22:07.291 11:13:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:07.291 11:13:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.291 11:13:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:07.291 11:13:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:07.291 11:13:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.291 11:13:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:07.291 11:13:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:07.291 11:13:15 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:22:07.291 11:13:15 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:22:07.291 11:13:15 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:22:07.291 11:13:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:07.291 11:13:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:07.291 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:22:07.291 11:13:15 -- nvmf/common.sh@470 -- # nvmfpid=75814 00:22:07.291 11:13:15 -- nvmf/common.sh@471 -- # waitforlisten 75814 00:22:07.291 11:13:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:07.291 11:13:15 -- common/autotest_common.sh@817 -- # '[' -z 75814 ']' 00:22:07.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.291 11:13:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.291 11:13:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:07.291 11:13:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.291 11:13:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:07.291 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:22:07.549 [2024-04-18 11:13:15.543819] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:07.549 [2024-04-18 11:13:15.544014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.549 [2024-04-18 11:13:15.729061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.116 [2024-04-18 11:13:16.031768] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.116 [2024-04-18 11:13:16.031864] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.116 [2024-04-18 11:13:16.031932] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.116 [2024-04-18 11:13:16.031951] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.116 [2024-04-18 11:13:16.031969] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
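The multipath test that the freshly started 4-core target (-m 0xF) now runs exposes a single Malloc0 namespace behind two TCP listeners and connects the kernel initiator to both paths; a condensed sketch of the sequence the following entries expand (host NQN/ID are the values generated by nvme gen-hostnqn earlier in this run):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r        # -r enables ANA reporting on the subsystem
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420     # path 1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420     # path 2

# one kernel connect per path (flags exactly as passed by the test script)
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -g -G
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -g -G

# each path appears as a hidden per-controller device; the test polls its ANA state from sysfs
cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state

The rest of the section then flips the two listeners between optimized, non_optimized and inaccessible with nvmf_subsystem_listener_set_ana_state while fio runs against /dev/nvme0n1, verifying that I/O keeps flowing over whichever path remains reachable.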
00:22:08.116 [2024-04-18 11:13:16.032809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.116 [2024-04-18 11:13:16.032930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.116 [2024-04-18 11:13:16.033076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.116 [2024-04-18 11:13:16.033124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.374 11:13:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:08.374 11:13:16 -- common/autotest_common.sh@850 -- # return 0 00:22:08.374 11:13:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:08.374 11:13:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:08.374 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:08.374 11:13:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.374 11:13:16 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:08.631 [2024-04-18 11:13:16.768254] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.631 11:13:16 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:09.197 Malloc0 00:22:09.197 11:13:17 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:22:09.455 11:13:17 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:09.455 11:13:17 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.713 [2024-04-18 11:13:17.874680] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.713 11:13:17 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:09.971 [2024-04-18 11:13:18.150958] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:09.971 11:13:18 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:22:10.228 11:13:18 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:22:10.486 11:13:18 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:22:10.486 11:13:18 -- common/autotest_common.sh@1184 -- # local i=0 00:22:10.486 11:13:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:10.486 11:13:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:10.486 11:13:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:12.385 11:13:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:12.385 11:13:20 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:22:12.385 11:13:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:12.663 11:13:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:12.663 11:13:20 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:12.663 11:13:20 -- common/autotest_common.sh@1194 -- # return 0 00:22:12.663 11:13:20 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:22:12.663 11:13:20 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:22:12.663 11:13:20 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:22:12.663 11:13:20 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:22:12.663 11:13:20 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:22:12.663 11:13:20 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:22:12.663 11:13:20 -- target/multipath.sh@38 -- # return 0 00:22:12.663 11:13:20 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:22:12.663 11:13:20 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:22:12.663 11:13:20 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:22:12.664 11:13:20 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:22:12.664 11:13:20 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:22:12.664 11:13:20 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:22:12.664 11:13:20 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:22:12.664 11:13:20 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:22:12.664 11:13:20 -- target/multipath.sh@22 -- # local timeout=20 00:22:12.664 11:13:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:12.664 11:13:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:22:12.664 11:13:20 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:22:12.664 11:13:20 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:22:12.664 11:13:20 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:22:12.664 11:13:20 -- target/multipath.sh@22 -- # local timeout=20 00:22:12.664 11:13:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:12.664 11:13:20 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:22:12.664 11:13:20 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:22:12.664 11:13:20 -- target/multipath.sh@85 -- # echo numa 00:22:12.664 11:13:20 -- target/multipath.sh@88 -- # fio_pid=75952 00:22:12.664 11:13:20 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:22:12.664 11:13:20 -- target/multipath.sh@90 -- # sleep 1 00:22:12.664 [global] 00:22:12.664 thread=1 00:22:12.664 invalidate=1 00:22:12.664 rw=randrw 00:22:12.664 time_based=1 00:22:12.664 runtime=6 00:22:12.664 ioengine=libaio 00:22:12.664 direct=1 00:22:12.664 bs=4096 00:22:12.664 iodepth=128 00:22:12.664 norandommap=0 00:22:12.664 numjobs=1 00:22:12.664 00:22:12.664 verify_dump=1 00:22:12.664 verify_backlog=512 00:22:12.664 verify_state_save=0 00:22:12.664 do_verify=1 00:22:12.664 verify=crc32c-intel 00:22:12.664 [job0] 00:22:12.664 filename=/dev/nvme0n1 00:22:12.664 Could not set queue depth (nvme0n1) 00:22:12.664 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:12.664 fio-3.35 00:22:12.664 Starting 1 thread 00:22:13.597 11:13:21 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:13.855 11:13:21 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:14.113 11:13:22 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:22:14.113 11:13:22 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:22:14.113 11:13:22 -- target/multipath.sh@22 -- # local timeout=20 00:22:14.113 11:13:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:14.113 11:13:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:22:14.113 11:13:22 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:22:14.113 11:13:22 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:22:14.113 11:13:22 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:22:14.113 11:13:22 -- target/multipath.sh@22 -- # local timeout=20 00:22:14.113 11:13:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:14.113 11:13:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:14.113 11:13:22 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:14.113 11:13:22 -- target/multipath.sh@25 -- # sleep 1s 00:22:15.487 11:13:23 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:22:15.487 11:13:23 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:22:15.487 11:13:23 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:15.487 11:13:23 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:15.487 11:13:23 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:15.745 11:13:23 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:22:15.745 11:13:23 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:22:15.745 11:13:23 -- target/multipath.sh@22 -- # local timeout=20 00:22:15.745 11:13:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:15.745 11:13:23 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:22:15.745 11:13:23 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:15.745 11:13:23 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:22:15.745 11:13:23 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:22:15.745 11:13:23 -- target/multipath.sh@22 -- # local timeout=20 00:22:15.745 11:13:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:15.745 11:13:23 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:15.745 11:13:23 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:22:15.745 11:13:23 -- target/multipath.sh@25 -- # sleep 1s 00:22:16.746 11:13:24 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:22:16.746 11:13:24 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:22:16.746 11:13:24 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:22:16.746 11:13:24 -- target/multipath.sh@104 -- # wait 75952 00:22:19.273 00:22:19.273 job0: (groupid=0, jobs=1): err= 0: pid=75973: Thu Apr 18 11:13:26 2024 00:22:19.273 read: IOPS=8190, BW=32.0MiB/s (33.5MB/s)(192MiB/6005msec) 00:22:19.273 slat (usec): min=4, max=10895, avg=73.50, stdev=346.18 00:22:19.273 clat (usec): min=2724, max=24128, avg=10767.05, stdev=1887.53 00:22:19.273 lat (usec): min=2912, max=24148, avg=10840.55, stdev=1902.74 00:22:19.273 clat percentiles (usec): 00:22:19.273 | 1.00th=[ 6128], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[ 9634], 00:22:19.273 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:22:19.273 | 70.00th=[11338], 80.00th=[11994], 90.00th=[12911], 95.00th=[14484], 00:22:19.273 | 99.00th=[16909], 99.50th=[17957], 99.90th=[19792], 99.95th=[20317], 00:22:19.273 | 99.99th=[21365] 00:22:19.273 bw ( KiB/s): min= 4944, max=21528, per=53.80%, avg=17627.36, stdev=4529.20, samples=11 00:22:19.273 iops : min= 1236, max= 5382, avg=4406.82, stdev=1132.30, samples=11 00:22:19.273 write: IOPS=4663, BW=18.2MiB/s (19.1MB/s)(96.9MiB/5320msec); 0 zone resets 00:22:19.273 slat (usec): min=5, max=2702, avg=84.76, stdev=232.22 00:22:19.273 clat (usec): min=4139, max=20772, avg=9363.26, stdev=1552.85 00:22:19.273 lat (usec): min=4182, max=20807, avg=9448.02, stdev=1560.45 00:22:19.273 clat percentiles (usec): 00:22:19.273 | 1.00th=[ 4948], 5.00th=[ 6915], 10.00th=[ 7898], 20.00th=[ 8455], 00:22:19.273 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:22:19.273 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10814], 95.00th=[12125], 00:22:19.273 | 99.00th=[14353], 99.50th=[15270], 99.90th=[17433], 99.95th=[17957], 00:22:19.273 | 99.99th=[20579] 00:22:19.273 bw ( KiB/s): min= 5160, max=21008, per=94.35%, avg=17600.45, stdev=4404.65, samples=11 00:22:19.273 iops : min= 1290, max= 5252, avg=4400.09, stdev=1101.16, samples=11 00:22:19.273 lat (msec) : 4=0.01%, 10=48.40%, 20=51.54%, 50=0.05% 00:22:19.273 cpu : usr=4.38%, sys=19.15%, ctx=4657, majf=0, minf=84 00:22:19.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:19.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:19.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:19.273 issued rwts: total=49184,24809,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:19.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:19.273 00:22:19.273 Run status group 0 (all jobs): 00:22:19.273 READ: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=192MiB (201MB), run=6005-6005msec 00:22:19.273 WRITE: bw=18.2MiB/s (19.1MB/s), 18.2MiB/s-18.2MiB/s (19.1MB/s-19.1MB/s), io=96.9MiB (102MB), run=5320-5320msec 00:22:19.273 00:22:19.273 Disk stats (read/write): 00:22:19.273 nvme0n1: ios=47980/24809, merge=0/0, ticks=487788/218677, in_queue=706465, util=98.61% 00:22:19.273 11:13:26 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:19.273 11:13:27 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:19.530 11:13:27 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:22:19.530 
11:13:27 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:22:19.530 11:13:27 -- target/multipath.sh@22 -- # local timeout=20 00:22:19.530 11:13:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:19.530 11:13:27 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:22:19.530 11:13:27 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:22:19.530 11:13:27 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:22:19.530 11:13:27 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:22:19.530 11:13:27 -- target/multipath.sh@22 -- # local timeout=20 00:22:19.530 11:13:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:19.530 11:13:27 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:19.530 11:13:27 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:22:19.530 11:13:27 -- target/multipath.sh@25 -- # sleep 1s 00:22:20.463 11:13:28 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:22:20.464 11:13:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:20.464 11:13:28 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:22:20.464 11:13:28 -- target/multipath.sh@113 -- # echo round-robin 00:22:20.464 11:13:28 -- target/multipath.sh@116 -- # fio_pid=76103 00:22:20.464 11:13:28 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:22:20.464 11:13:28 -- target/multipath.sh@118 -- # sleep 1 00:22:20.464 [global] 00:22:20.464 thread=1 00:22:20.464 invalidate=1 00:22:20.464 rw=randrw 00:22:20.464 time_based=1 00:22:20.464 runtime=6 00:22:20.464 ioengine=libaio 00:22:20.464 direct=1 00:22:20.464 bs=4096 00:22:20.464 iodepth=128 00:22:20.464 norandommap=0 00:22:20.464 numjobs=1 00:22:20.464 00:22:20.464 verify_dump=1 00:22:20.464 verify_backlog=512 00:22:20.464 verify_state_save=0 00:22:20.464 do_verify=1 00:22:20.464 verify=crc32c-intel 00:22:20.464 [job0] 00:22:20.464 filename=/dev/nvme0n1 00:22:20.464 Could not set queue depth (nvme0n1) 00:22:20.806 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:20.806 fio-3.35 00:22:20.806 Starting 1 thread 00:22:21.373 11:13:29 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:21.938 11:13:29 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:21.938 11:13:30 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:22:21.938 11:13:30 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:22:21.938 11:13:30 -- target/multipath.sh@22 -- # local timeout=20 00:22:21.938 11:13:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:21.938 11:13:30 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:22:21.938 11:13:30 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:22:21.938 11:13:30 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:22:21.938 11:13:30 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:22:21.938 11:13:30 -- target/multipath.sh@22 -- # local timeout=20 00:22:21.938 11:13:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:21.938 11:13:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:21.938 11:13:30 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:21.938 11:13:30 -- target/multipath.sh@25 -- # sleep 1s 00:22:23.311 11:13:31 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:22:23.311 11:13:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:23.311 11:13:31 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:23.311 11:13:31 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:23.311 11:13:31 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:23.570 11:13:31 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:22:23.570 11:13:31 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:22:23.570 11:13:31 -- target/multipath.sh@22 -- # local timeout=20 00:22:23.570 11:13:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:23.570 11:13:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:22:23.570 11:13:31 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:23.570 11:13:31 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:22:23.570 11:13:31 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:22:23.570 11:13:31 -- target/multipath.sh@22 -- # local timeout=20 00:22:23.570 11:13:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:23.570 11:13:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:23.570 11:13:31 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:22:23.570 11:13:31 -- target/multipath.sh@25 -- # sleep 1s 00:22:24.503 11:13:32 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:22:24.503 11:13:32 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:22:24.503 11:13:32 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:22:24.503 11:13:32 -- target/multipath.sh@132 -- # wait 76103 00:22:27.040 00:22:27.040 job0: (groupid=0, jobs=1): err= 0: pid=76124: Thu Apr 18 11:13:34 2024 00:22:27.040 read: IOPS=9239, BW=36.1MiB/s (37.8MB/s)(217MiB/6008msec) 00:22:27.040 slat (usec): min=3, max=6914, avg=55.14, stdev=283.27 00:22:27.040 clat (usec): min=473, max=19635, avg=9520.35, stdev=2458.29 00:22:27.040 lat (usec): min=496, max=19646, avg=9575.49, stdev=2484.87 00:22:27.040 clat percentiles (usec): 00:22:27.040 | 1.00th=[ 3261], 5.00th=[ 4883], 10.00th=[ 6063], 20.00th=[ 7439], 00:22:27.040 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:22:27.040 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11994], 95.00th=[13042], 00:22:27.040 | 99.00th=[15926], 99.50th=[16712], 99.90th=[17695], 99.95th=[18220], 00:22:27.040 | 99.99th=[19006] 00:22:27.040 bw ( KiB/s): min=10456, max=33848, per=52.02%, avg=19224.67, stdev=6344.73, samples=12 00:22:27.040 iops : min= 2614, max= 8462, avg=4806.17, stdev=1586.18, samples=12 00:22:27.040 write: IOPS=5381, BW=21.0MiB/s (22.0MB/s)(113MiB/5378msec); 0 zone resets 00:22:27.040 slat (usec): min=12, max=3489, avg=67.57, stdev=189.07 00:22:27.040 clat (usec): min=560, max=19340, avg=8094.25, stdev=2488.55 00:22:27.040 lat (usec): min=587, max=19367, avg=8161.83, stdev=2511.06 00:22:27.040 clat percentiles (usec): 00:22:27.040 | 1.00th=[ 2540], 5.00th=[ 3720], 10.00th=[ 4424], 20.00th=[ 5407], 00:22:27.040 | 30.00th=[ 6587], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9372], 00:22:27.040 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10421], 95.00th=[10945], 00:22:27.040 | 99.00th=[14091], 99.50th=[14746], 99.90th=[16581], 99.95th=[16909], 00:22:27.040 | 99.99th=[18220] 00:22:27.040 bw ( KiB/s): min=11128, max=33240, per=89.49%, avg=19263.33, stdev=6222.27, samples=12 00:22:27.040 iops : min= 2782, max= 8310, avg=4815.83, stdev=1555.57, samples=12 00:22:27.040 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:22:27.040 lat (msec) : 2=0.21%, 4=3.51%, 10=60.87%, 20=35.40% 00:22:27.040 cpu : usr=5.01%, sys=20.76%, ctx=5412, majf=0, minf=60 00:22:27.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:27.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:27.040 issued rwts: total=55509,28942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:27.040 00:22:27.040 Run status group 0 (all jobs): 00:22:27.040 READ: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=217MiB (227MB), run=6008-6008msec 00:22:27.040 WRITE: bw=21.0MiB/s (22.0MB/s), 21.0MiB/s-21.0MiB/s (22.0MB/s-22.0MB/s), io=113MiB (119MB), run=5378-5378msec 00:22:27.040 00:22:27.040 Disk stats (read/write): 00:22:27.040 nvme0n1: ios=54706/28487, merge=0/0, ticks=492015/215667, in_queue=707682, util=98.60% 00:22:27.040 11:13:34 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:27.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:22:27.040 11:13:34 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:27.040 11:13:34 -- common/autotest_common.sh@1205 -- # local i=0 00:22:27.040 11:13:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:27.040 11:13:34 
-- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:27.040 11:13:34 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:27.040 11:13:34 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:27.040 11:13:34 -- common/autotest_common.sh@1217 -- # return 0 00:22:27.040 11:13:34 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.040 11:13:35 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:22:27.040 11:13:35 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:22:27.040 11:13:35 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:22:27.040 11:13:35 -- target/multipath.sh@144 -- # nvmftestfini 00:22:27.040 11:13:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:27.040 11:13:35 -- nvmf/common.sh@117 -- # sync 00:22:27.299 11:13:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.299 11:13:35 -- nvmf/common.sh@120 -- # set +e 00:22:27.299 11:13:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.299 11:13:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.299 rmmod nvme_tcp 00:22:27.299 rmmod nvme_fabrics 00:22:27.299 rmmod nvme_keyring 00:22:27.299 11:13:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.299 11:13:35 -- nvmf/common.sh@124 -- # set -e 00:22:27.299 11:13:35 -- nvmf/common.sh@125 -- # return 0 00:22:27.299 11:13:35 -- nvmf/common.sh@478 -- # '[' -n 75814 ']' 00:22:27.299 11:13:35 -- nvmf/common.sh@479 -- # killprocess 75814 00:22:27.299 11:13:35 -- common/autotest_common.sh@936 -- # '[' -z 75814 ']' 00:22:27.299 11:13:35 -- common/autotest_common.sh@940 -- # kill -0 75814 00:22:27.299 11:13:35 -- common/autotest_common.sh@941 -- # uname 00:22:27.299 11:13:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:27.299 11:13:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75814 00:22:27.299 killing process with pid 75814 00:22:27.299 11:13:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:27.299 11:13:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:27.299 11:13:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75814' 00:22:27.299 11:13:35 -- common/autotest_common.sh@955 -- # kill 75814 00:22:27.299 11:13:35 -- common/autotest_common.sh@960 -- # wait 75814 00:22:28.674 11:13:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:28.674 11:13:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:28.674 11:13:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:28.674 11:13:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.674 11:13:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.674 11:13:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.674 11:13:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.674 11:13:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.674 11:13:36 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:28.674 ************************************ 00:22:28.674 END TEST nvmf_multipath 00:22:28.674 ************************************ 00:22:28.674 00:22:28.674 real 0m21.801s 00:22:28.674 user 1m23.554s 00:22:28.674 sys 0m6.061s 00:22:28.674 11:13:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:28.674 11:13:36 -- common/autotest_common.sh@10 -- # set +x 00:22:28.674 11:13:36 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:28.674 11:13:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:28.674 11:13:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:28.674 11:13:36 -- common/autotest_common.sh@10 -- # set +x 00:22:28.674 ************************************ 00:22:28.674 START TEST nvmf_zcopy 00:22:28.674 ************************************ 00:22:28.674 11:13:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:28.933 * Looking for test storage... 00:22:28.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:28.933 11:13:36 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:28.933 11:13:36 -- nvmf/common.sh@7 -- # uname -s 00:22:28.933 11:13:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.933 11:13:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.933 11:13:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.933 11:13:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.933 11:13:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.933 11:13:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.933 11:13:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.933 11:13:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.933 11:13:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.933 11:13:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.933 11:13:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:22:28.933 11:13:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:22:28.933 11:13:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.933 11:13:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.933 11:13:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:28.933 11:13:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.933 11:13:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:28.933 11:13:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.933 11:13:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.933 11:13:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.933 11:13:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.933 11:13:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.933 11:13:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.933 11:13:36 -- paths/export.sh@5 -- # export PATH 00:22:28.933 11:13:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.933 11:13:36 -- nvmf/common.sh@47 -- # : 0 00:22:28.933 11:13:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.933 11:13:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.933 11:13:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.933 11:13:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.933 11:13:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.933 11:13:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.933 11:13:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.933 11:13:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.933 11:13:36 -- target/zcopy.sh@12 -- # nvmftestinit 00:22:28.933 11:13:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:28.933 11:13:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.933 11:13:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:28.933 11:13:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:28.933 11:13:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:28.933 11:13:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.933 11:13:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.933 11:13:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.933 11:13:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:28.933 11:13:36 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:28.933 11:13:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:28.934 11:13:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:28.934 11:13:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:28.934 11:13:36 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:28.934 11:13:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.934 11:13:36 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.934 11:13:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:28.934 11:13:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:28.934 11:13:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:28.934 11:13:36 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:28.934 11:13:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:28.934 11:13:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.934 11:13:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:28.934 11:13:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:28.934 11:13:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:28.934 11:13:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:28.934 11:13:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:28.934 11:13:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:28.934 Cannot find device "nvmf_tgt_br" 00:22:28.934 11:13:37 -- nvmf/common.sh@155 -- # true 00:22:28.934 11:13:37 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:28.934 Cannot find device "nvmf_tgt_br2" 00:22:28.934 11:13:37 -- nvmf/common.sh@156 -- # true 00:22:28.934 11:13:37 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:28.934 11:13:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:28.934 Cannot find device "nvmf_tgt_br" 00:22:28.934 11:13:37 -- nvmf/common.sh@158 -- # true 00:22:28.934 11:13:37 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:28.934 Cannot find device "nvmf_tgt_br2" 00:22:28.934 11:13:37 -- nvmf/common.sh@159 -- # true 00:22:28.934 11:13:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:28.934 11:13:37 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:28.934 11:13:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:28.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.934 11:13:37 -- nvmf/common.sh@162 -- # true 00:22:28.934 11:13:37 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:28.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:28.934 11:13:37 -- nvmf/common.sh@163 -- # true 00:22:28.934 11:13:37 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:28.934 11:13:37 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:28.934 11:13:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:28.934 11:13:37 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:28.934 11:13:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:29.192 11:13:37 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:29.192 11:13:37 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:29.192 11:13:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:29.192 11:13:37 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:29.192 11:13:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:29.192 11:13:37 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:29.192 11:13:37 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:29.192 11:13:37 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:29.192 11:13:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:29.192 11:13:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:29.192 11:13:37 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:29.192 11:13:37 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:29.192 11:13:37 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:29.192 11:13:37 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:29.192 11:13:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:29.192 11:13:37 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:29.192 11:13:37 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:29.192 11:13:37 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:29.192 11:13:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:29.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:22:29.192 00:22:29.192 --- 10.0.0.2 ping statistics --- 00:22:29.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.192 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:29.192 11:13:37 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:29.192 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:29.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:22:29.192 00:22:29.192 --- 10.0.0.3 ping statistics --- 00:22:29.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.192 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:29.192 11:13:37 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:29.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:29.192 00:22:29.192 --- 10.0.0.1 ping statistics --- 00:22:29.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.192 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:29.192 11:13:37 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.192 11:13:37 -- nvmf/common.sh@422 -- # return 0 00:22:29.192 11:13:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:29.192 11:13:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.192 11:13:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:29.192 11:13:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:29.192 11:13:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.192 11:13:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:29.192 11:13:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:29.192 11:13:37 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:22:29.192 11:13:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:29.192 11:13:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:29.192 11:13:37 -- common/autotest_common.sh@10 -- # set +x 00:22:29.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
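The veth/namespace topology that nvmf_veth_init builds in the trace above can be reproduced by hand; the following is a minimal sketch using only the interface names, addresses, and firewall rules visible in this log (nvmf_tgt_ns_spdk, nvmf_br, 10.0.0.1-3), not the full common.sh helper:

# Create the target namespace and the three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side interfaces into the namespace and assign the test addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and enslave the host-side peers to one bridge
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in, forward across the bridge, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1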
00:22:29.192 11:13:37 -- nvmf/common.sh@470 -- # nvmfpid=76422 00:22:29.192 11:13:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:29.192 11:13:37 -- nvmf/common.sh@471 -- # waitforlisten 76422 00:22:29.192 11:13:37 -- common/autotest_common.sh@817 -- # '[' -z 76422 ']' 00:22:29.192 11:13:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.192 11:13:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:29.193 11:13:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.193 11:13:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:29.193 11:13:37 -- common/autotest_common.sh@10 -- # set +x 00:22:29.450 [2024-04-18 11:13:37.457416] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:29.450 [2024-04-18 11:13:37.457900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.450 [2024-04-18 11:13:37.636574] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.715 [2024-04-18 11:13:37.907685] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.715 [2024-04-18 11:13:37.908076] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.715 [2024-04-18 11:13:37.908294] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.715 [2024-04-18 11:13:37.908449] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.715 [2024-04-18 11:13:37.908668] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
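For reference, the launch sequence recorded above amounts to starting nvmf_tgt inside the test namespace and waiting for its RPC socket; a rough equivalent is sketched below (the real waitforlisten helper lives in common/autotest_common.sh and is not traced here, so the polling loop is an assumption):

# Start the SPDK NVMe-oF target inside the namespace with the flags recorded above
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Assumed polling loop: wait until the UNIX-domain RPC socket (/var/tmp/spdk.sock) answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done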
00:22:29.715 [2024-04-18 11:13:37.908727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.296 11:13:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:30.296 11:13:38 -- common/autotest_common.sh@850 -- # return 0 00:22:30.296 11:13:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:30.296 11:13:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:30.296 11:13:38 -- common/autotest_common.sh@10 -- # set +x 00:22:30.296 11:13:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.296 11:13:38 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:22:30.296 11:13:38 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:22:30.296 11:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.296 11:13:38 -- common/autotest_common.sh@10 -- # set +x 00:22:30.296 [2024-04-18 11:13:38.386639] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.296 11:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.296 11:13:38 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:30.296 11:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.296 11:13:38 -- common/autotest_common.sh@10 -- # set +x 00:22:30.296 11:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.296 11:13:38 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.296 11:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.296 11:13:38 -- common/autotest_common.sh@10 -- # set +x 00:22:30.296 [2024-04-18 11:13:38.402804] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.296 11:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.296 11:13:38 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:30.296 11:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.296 11:13:38 -- common/autotest_common.sh@10 -- # set +x 00:22:30.296 11:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.296 11:13:38 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:22:30.297 11:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.297 11:13:38 -- common/autotest_common.sh@10 -- # set +x 00:22:30.297 malloc0 00:22:30.297 11:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.297 11:13:38 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.297 11:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.297 11:13:38 -- common/autotest_common.sh@10 -- # set +x 00:22:30.297 11:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.297 11:13:38 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:22:30.297 11:13:38 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:22:30.297 11:13:38 -- nvmf/common.sh@521 -- # config=() 00:22:30.297 11:13:38 -- nvmf/common.sh@521 -- # local subsystem config 00:22:30.297 11:13:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:30.297 11:13:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:30.297 { 00:22:30.297 "params": { 00:22:30.297 "name": "Nvme$subsystem", 00:22:30.297 "trtype": "$TEST_TRANSPORT", 
00:22:30.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.297 "adrfam": "ipv4", 00:22:30.297 "trsvcid": "$NVMF_PORT", 00:22:30.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.297 "hdgst": ${hdgst:-false}, 00:22:30.297 "ddgst": ${ddgst:-false} 00:22:30.297 }, 00:22:30.297 "method": "bdev_nvme_attach_controller" 00:22:30.297 } 00:22:30.297 EOF 00:22:30.297 )") 00:22:30.297 11:13:38 -- nvmf/common.sh@543 -- # cat 00:22:30.297 11:13:38 -- nvmf/common.sh@545 -- # jq . 00:22:30.297 11:13:38 -- nvmf/common.sh@546 -- # IFS=, 00:22:30.297 11:13:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:30.297 "params": { 00:22:30.297 "name": "Nvme1", 00:22:30.297 "trtype": "tcp", 00:22:30.297 "traddr": "10.0.0.2", 00:22:30.297 "adrfam": "ipv4", 00:22:30.297 "trsvcid": "4420", 00:22:30.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.297 "hdgst": false, 00:22:30.297 "ddgst": false 00:22:30.297 }, 00:22:30.297 "method": "bdev_nvme_attach_controller" 00:22:30.297 }' 00:22:30.556 [2024-04-18 11:13:38.576901] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:30.556 [2024-04-18 11:13:38.577065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76473 ] 00:22:30.556 [2024-04-18 11:13:38.753368] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.122 [2024-04-18 11:13:39.043226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.380 Running I/O for 10 seconds... 00:22:41.350 00:22:41.350 Latency(us) 00:22:41.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.350 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:22:41.350 Verification LBA range: start 0x0 length 0x1000 00:22:41.350 Nvme1n1 : 10.02 4328.48 33.82 0.00 0.00 29486.88 1064.96 39083.29 00:22:41.350 =================================================================================================================== 00:22:41.350 Total : 4328.48 33.82 0.00 0.00 29486.88 1064.96 39083.29 00:22:42.725 11:13:50 -- target/zcopy.sh@39 -- # perfpid=76607 00:22:42.725 11:13:50 -- target/zcopy.sh@41 -- # xtrace_disable 00:22:42.725 11:13:50 -- common/autotest_common.sh@10 -- # set +x 00:22:42.725 11:13:50 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:22:42.725 11:13:50 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:22:42.725 11:13:50 -- nvmf/common.sh@521 -- # config=() 00:22:42.725 11:13:50 -- nvmf/common.sh@521 -- # local subsystem config 00:22:42.725 11:13:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:42.725 11:13:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:42.725 { 00:22:42.725 "params": { 00:22:42.725 "name": "Nvme$subsystem", 00:22:42.725 "trtype": "$TEST_TRANSPORT", 00:22:42.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.725 "adrfam": "ipv4", 00:22:42.725 "trsvcid": "$NVMF_PORT", 00:22:42.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.725 "hdgst": ${hdgst:-false}, 00:22:42.725 "ddgst": ${ddgst:-false} 00:22:42.725 }, 00:22:42.725 "method": "bdev_nvme_attach_controller" 00:22:42.725 } 00:22:42.725 EOF 00:22:42.725 
)") 00:22:42.725 11:13:50 -- nvmf/common.sh@543 -- # cat 00:22:42.725 [2024-04-18 11:13:50.692481] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.725 [2024-04-18 11:13:50.692546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.725 11:13:50 -- nvmf/common.sh@545 -- # jq . 00:22:42.725 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.725 11:13:50 -- nvmf/common.sh@546 -- # IFS=, 00:22:42.725 11:13:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:42.725 "params": { 00:22:42.725 "name": "Nvme1", 00:22:42.725 "trtype": "tcp", 00:22:42.725 "traddr": "10.0.0.2", 00:22:42.725 "adrfam": "ipv4", 00:22:42.725 "trsvcid": "4420", 00:22:42.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.725 "hdgst": false, 00:22:42.725 "ddgst": false 00:22:42.725 }, 00:22:42.725 "method": "bdev_nvme_attach_controller" 00:22:42.725 }' 00:22:42.725 [2024-04-18 11:13:50.704452] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.725 [2024-04-18 11:13:50.704505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.725 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.725 [2024-04-18 11:13:50.716443] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.725 [2024-04-18 11:13:50.716509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.725 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.725 [2024-04-18 11:13:50.728419] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.725 [2024-04-18 11:13:50.728476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.725 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.725 [2024-04-18 11:13:50.740433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.725 [2024-04-18 11:13:50.740495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.725 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.725 [2024-04-18 11:13:50.752400] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.725 [2024-04-18 11:13:50.752445] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.725 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.725 [2024-04-18 11:13:50.764381] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.725 [2024-04-18 11:13:50.764427] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.725 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.725 [2024-04-18 11:13:50.776448] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.725 [2024-04-18 11:13:50.776509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.725 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.725 [2024-04-18 11:13:50.784990] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:42.725 [2024-04-18 11:13:50.785323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76607 ] 00:22:42.725 [2024-04-18 11:13:50.788444] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.725 [2024-04-18 11:13:50.788651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.800455] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.800510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.812449] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.812528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.824448] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.824507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.836462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.836534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.848487] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.848546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.860401] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.860447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.872434] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.872478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.880417] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.880461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.888395] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.888438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.900427] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.900494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.908402] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.908456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.920431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.920475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.932459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.932501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.726 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.726 [2024-04-18 11:13:50.944418] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.726 [2024-04-18 11:13:50.944464] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:50.955281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.985 [2024-04-18 11:13:50.956443] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:50.956499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:42.985 [2024-04-18 11:13:50.968507] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:50.968563] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:50.980449] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:50.980496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:50.992457] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:50.992500] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:51.004435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:51.004489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:51.016466] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:51.016521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:51.024465] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:51.024508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:51.036458] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:51.036500] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:51 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:51.048512] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:51.048563] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:51.060573] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:51.060629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:51.072565] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:51.072622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:51.084548] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:51.084603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:51.096530] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:51.096576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.985 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.985 [2024-04-18 11:13:51.108542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.985 [2024-04-18 11:13:51.108594] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.986 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.986 [2024-04-18 11:13:51.120537] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.986 [2024-04-18 11:13:51.120579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.986 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.986 [2024-04-18 11:13:51.132482] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.986 [2024-04-18 11:13:51.132524] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.986 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.986 [2024-04-18 11:13:51.144507] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.986 [2024-04-18 11:13:51.144549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.986 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.986 [2024-04-18 11:13:51.156517] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.986 [2024-04-18 11:13:51.156560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.986 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.986 [2024-04-18 11:13:51.168495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.986 [2024-04-18 11:13:51.168536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.986 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.986 [2024-04-18 11:13:51.180516] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.986 [2024-04-18 11:13:51.180559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.986 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.986 [2024-04-18 11:13:51.192558] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.986 [2024-04-18 11:13:51.192620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.986 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:42.986 [2024-04-18 11:13:51.204580] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.986 [2024-04-18 11:13:51.204672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.245 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.245 [2024-04-18 11:13:51.216649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.245 [2024-04-18 11:13:51.216702] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.245 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.245 [2024-04-18 11:13:51.221876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.245 [2024-04-18 11:13:51.228588] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.245 [2024-04-18 11:13:51.228634] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.245 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.245 [2024-04-18 11:13:51.240610] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.245 [2024-04-18 11:13:51.240668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.245 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.245 [2024-04-18 11:13:51.252623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.245 [2024-04-18 11:13:51.252707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.245 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.245 [2024-04-18 11:13:51.264528] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.245 [2024-04-18 11:13:51.264568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.245 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:43.245 [2024-04-18 11:13:51.276574] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.245 [2024-04-18 11:13:51.276616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.245 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.245 [2024-04-18 11:13:51.288539] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.245 [2024-04-18 11:13:51.288586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.245 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.245 [2024-04-18 11:13:51.300619] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.245 [2024-04-18 11:13:51.300674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.245 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.245 [2024-04-18 11:13:51.312637] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.245 [2024-04-18 11:13:51.312694] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.245 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.324619] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.324714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.336689] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.336742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.348594] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.348637] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.360599] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.360641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.372673] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.372724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.384628] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.384692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.396679] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.396729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.408671] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.408728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.420644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.420690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.432693] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.432749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.444673] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.444717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.246 [2024-04-18 11:13:51.456687] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.246 [2024-04-18 11:13:51.456738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.246 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.504 [2024-04-18 11:13:51.468680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.504 [2024-04-18 11:13:51.468733] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.504 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.504 [2024-04-18 11:13:51.480643] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.504 [2024-04-18 11:13:51.480704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.504 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.492649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.492692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.504702] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.504742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.516633] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.516674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.528662] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.528705] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.540700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.540746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.552737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.552799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.564743] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.564814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.576715] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.576777] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.588789] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.588855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.600865] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.600935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.612772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.612838] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.624821] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.624888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.637575] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.637645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.648813] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.648876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 Running I/O for 5 seconds... 
00:22:43.505 [2024-04-18 11:13:51.668333] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.668402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.686418] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.686476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.703100] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.703199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.505 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.505 [2024-04-18 11:13:51.720515] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.505 [2024-04-18 11:13:51.720592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.736635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.736701] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.750097] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.750168] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.768758] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.768850] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.786804] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.786863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.804944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.804997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.822465] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.822522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.838710] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.838772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.852167] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.852223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.868114] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.868169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.886487] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.886541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.905521] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.905576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.923308] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.923369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.937429] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.937487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.954553] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.954609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:43.764 [2024-04-18 11:13:51.972516] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.764 [2024-04-18 11:13:51.972572] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.764 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.023 [2024-04-18 11:13:51.989998] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.023 [2024-04-18 11:13:51.990082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.023 2024/04/18 11:13:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.023 [2024-04-18 11:13:52.007969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.023 [2024-04-18 11:13:52.008038] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.023 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.023 [2024-04-18 11:13:52.021223] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.023 [2024-04-18 11:13:52.021291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.023 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.023 [2024-04-18 11:13:52.039756] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.023 [2024-04-18 11:13:52.039831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.023 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.023 [2024-04-18 11:13:52.057510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.023 [2024-04-18 11:13:52.057581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.023 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.023 [2024-04-18 11:13:52.075806] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.023 [2024-04-18 11:13:52.075874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.023 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.023 [2024-04-18 11:13:52.093516] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.023 [2024-04-18 11:13:52.093586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.023 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.023 [2024-04-18 11:13:52.106878] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:44.023 [2024-04-18 11:13:52.106949] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.023 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.023 [2024-04-18 11:13:52.125462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.023 [2024-04-18 11:13:52.125549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.023 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.023 [2024-04-18 11:13:52.144328] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.023 [2024-04-18 11:13:52.144414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.024 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.024 [2024-04-18 11:13:52.161979] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.024 [2024-04-18 11:13:52.162060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.024 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.024 [2024-04-18 11:13:52.188101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.024 [2024-04-18 11:13:52.188202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.024 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.024 [2024-04-18 11:13:52.202729] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.024 [2024-04-18 11:13:52.202802] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.024 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.024 [2024-04-18 11:13:52.220181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.024 [2024-04-18 11:13:52.220238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.024 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.024 [2024-04-18 11:13:52.238307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.024 [2024-04-18 11:13:52.238385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.024 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.282 [2024-04-18 11:13:52.255503] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.282 [2024-04-18 11:13:52.255588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.282 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.282 [2024-04-18 11:13:52.271832] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.282 [2024-04-18 11:13:52.271911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.282 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.282 [2024-04-18 11:13:52.289301] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.282 [2024-04-18 11:13:52.289384] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.282 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.282 [2024-04-18 11:13:52.306099] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.282 [2024-04-18 11:13:52.306192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.282 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.282 [2024-04-18 11:13:52.318847] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.282 [2024-04-18 11:13:52.318934] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.282 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.282 [2024-04-18 11:13:52.335141] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:44.282 [2024-04-18 11:13:52.335232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.282 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.282 [2024-04-18 11:13:52.350128] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.282 [2024-04-18 11:13:52.350221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.282 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.282 [2024-04-18 11:13:52.368201] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.283 [2024-04-18 11:13:52.368298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.283 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.283 [2024-04-18 11:13:52.382576] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.283 [2024-04-18 11:13:52.382645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.283 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.283 [2024-04-18 11:13:52.400730] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.283 [2024-04-18 11:13:52.400804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.283 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.283 [2024-04-18 11:13:52.419451] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.283 [2024-04-18 11:13:52.419533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.283 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.283 [2024-04-18 11:13:52.436860] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.283 [2024-04-18 11:13:52.436933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.283 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.283 [2024-04-18 11:13:52.453505] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.283 [2024-04-18 11:13:52.453573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.283 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.283 [2024-04-18 11:13:52.466827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.283 [2024-04-18 11:13:52.466891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.283 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.283 [2024-04-18 11:13:52.485905] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.283 [2024-04-18 11:13:52.485993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.283 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.283 [2024-04-18 11:13:52.500822] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.283 [2024-04-18 11:13:52.500883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.542 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.542 [2024-04-18 11:13:52.518589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.542 [2024-04-18 11:13:52.518655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.542 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.542 [2024-04-18 11:13:52.536872] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.542 [2024-04-18 11:13:52.536949] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.542 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.542 [2024-04-18 11:13:52.554913] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.542 [2024-04-18 11:13:52.554981] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.542 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.542 [2024-04-18 11:13:52.571635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.542 [2024-04-18 11:13:52.571703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.542 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.542 [2024-04-18 11:13:52.585034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.542 [2024-04-18 11:13:52.585100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.542 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.542 [2024-04-18 11:13:52.604425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.542 [2024-04-18 11:13:52.604502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.542 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.542 [2024-04-18 11:13:52.621769] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.542 [2024-04-18 11:13:52.621831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.542 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.542 [2024-04-18 11:13:52.638329] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.542 [2024-04-18 11:13:52.638391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.542 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.542 [2024-04-18 11:13:52.657277] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.542 [2024-04-18 11:13:52.657351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.543 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.543 [2024-04-18 11:13:52.670956] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.543 [2024-04-18 11:13:52.671021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.543 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.543 [2024-04-18 11:13:52.690529] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.543 [2024-04-18 11:13:52.690612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.543 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.543 [2024-04-18 11:13:52.709156] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.543 [2024-04-18 11:13:52.709239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.543 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.543 [2024-04-18 11:13:52.726650] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.543 [2024-04-18 11:13:52.726716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.543 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.543 [2024-04-18 11:13:52.743269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.543 [2024-04-18 11:13:52.743346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.543 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.543 [2024-04-18 11:13:52.761357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.543 [2024-04-18 11:13:52.761435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.774848] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.774920] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.794387] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.794477] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.812531] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.812602] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.826585] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.826646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.846932] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.847003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.863839] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.863911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.880709] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.880784] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:44.802 [2024-04-18 11:13:52.897640] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.897717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.911480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.911560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.930572] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.930648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.945447] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.945515] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.960848] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.960917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.980673] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.980726] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.802 [2024-04-18 11:13:52.998849] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.802 [2024-04-18 11:13:52.998920] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.802 2024/04/18 11:13:53 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:44.803 [2024-04-18 11:13:53.017094] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:44.803 [2024-04-18 11:13:53.017362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:44.803 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.061 [2024-04-18 11:13:53.034403] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.061 [2024-04-18 11:13:53.034661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.061 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.061 [2024-04-18 11:13:53.052704] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.061 [2024-04-18 11:13:53.052792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.061 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.061 [2024-04-18 11:13:53.069178] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.061 [2024-04-18 11:13:53.069257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.061 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.061 [2024-04-18 11:13:53.087299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.061 [2024-04-18 11:13:53.087391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.061 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.061 [2024-04-18 11:13:53.103808] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.061 [2024-04-18 11:13:53.103895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.061 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.061 [2024-04-18 11:13:53.122251] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.061 [2024-04-18 11:13:53.122341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.061 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.061 [2024-04-18 11:13:53.140419] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.061 [2024-04-18 11:13:53.140503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.061 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.062 [2024-04-18 11:13:53.158321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.062 [2024-04-18 11:13:53.158388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.062 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.062 [2024-04-18 11:13:53.171857] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.062 [2024-04-18 11:13:53.171930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.062 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.062 [2024-04-18 11:13:53.190938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.062 [2024-04-18 11:13:53.191023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.062 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.062 [2024-04-18 11:13:53.209549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.062 [2024-04-18 11:13:53.209631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.062 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.062 [2024-04-18 11:13:53.226853] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.062 [2024-04-18 11:13:53.226933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.062 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.062 [2024-04-18 11:13:53.245006] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.062 [2024-04-18 11:13:53.245073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.062 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.062 [2024-04-18 11:13:53.258365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.062 [2024-04-18 11:13:53.258421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.062 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.062 [2024-04-18 11:13:53.277497] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.062 [2024-04-18 11:13:53.277568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.062 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.321 [2024-04-18 11:13:53.294441] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.321 [2024-04-18 11:13:53.294503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.321 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.321 [2024-04-18 11:13:53.312472] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.321 [2024-04-18 11:13:53.312549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.321 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.321 [2024-04-18 11:13:53.329363] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.321 [2024-04-18 11:13:53.329443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.321 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.321 [2024-04-18 11:13:53.342872] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:45.321 [2024-04-18 11:13:53.342962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.321 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.321 [2024-04-18 11:13:53.362695] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.321 [2024-04-18 11:13:53.362775] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.321 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.321 [2024-04-18 11:13:53.379651] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.322 [2024-04-18 11:13:53.379719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.322 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.322 [2024-04-18 11:13:53.396270] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.322 [2024-04-18 11:13:53.396341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.322 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.322 [2024-04-18 11:13:53.409643] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.322 [2024-04-18 11:13:53.409706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.322 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.322 [2024-04-18 11:13:53.428479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.322 [2024-04-18 11:13:53.428553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.322 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.322 [2024-04-18 11:13:53.446303] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.322 [2024-04-18 11:13:53.446376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.322 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.322 [2024-04-18 11:13:53.462787] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.322 [2024-04-18 11:13:53.462858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.322 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.322 [2024-04-18 11:13:53.479720] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.322 [2024-04-18 11:13:53.479785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.322 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.322 [2024-04-18 11:13:53.497326] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.322 [2024-04-18 11:13:53.497393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.322 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.322 [2024-04-18 11:13:53.515160] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.322 [2024-04-18 11:13:53.515233] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.322 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.322 [2024-04-18 11:13:53.531685] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.322 [2024-04-18 11:13:53.531753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.322 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.545066] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.545152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.564315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:45.621 [2024-04-18 11:13:53.564385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.581721] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.581798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.598811] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.598888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.615649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.615718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.628407] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.628474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.645039] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.645125] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.663059] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.663155] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.679946] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.680020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.696709] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.696772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.715997] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.716065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.732862] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.732943] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.746673] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.746745] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.763589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.763666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.778697] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.778768] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.794523] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.794599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.813691] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.813783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.621 [2024-04-18 11:13:53.831033] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.621 [2024-04-18 11:13:53.831136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.621 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.880 [2024-04-18 11:13:53.844461] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.880 [2024-04-18 11:13:53.844531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.880 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.880 [2024-04-18 11:13:53.863476] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.880 [2024-04-18 11:13:53.863566] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:53.881021] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:53.881150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:53.894530] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:53.894601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:53.914000] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:53.914090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:53.928737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:53.928807] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:53.944273] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:53.944347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:53.962587] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:53.962661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:53.979872] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:53.979963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:53.993373] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:53.993439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:45.881 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:54.013430] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:54.013735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:54.033048] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:54.033314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:54.050937] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:54.051039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:54.064652] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:54.064725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:45.881 [2024-04-18 11:13:54.084041] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:45.881 [2024-04-18 11:13:54.084151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:45.881 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.141 [2024-04-18 11:13:54.102383] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.141 [2024-04-18 11:13:54.102463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.141 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:46.141 [2024-04-18 11:13:54.118933] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.141 [2024-04-18 11:13:54.119008] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.141 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.141 [2024-04-18 11:13:54.136812] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.141 [2024-04-18 11:13:54.136891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.141 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.141 [2024-04-18 11:13:54.153948] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.141 [2024-04-18 11:13:54.154034] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.141 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.141 [2024-04-18 11:13:54.167443] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.141 [2024-04-18 11:13:54.167506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.141 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.141 [2024-04-18 11:13:54.186008] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.141 [2024-04-18 11:13:54.186077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.141 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.141 [2024-04-18 11:13:54.203982] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.141 [2024-04-18 11:13:54.204065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.141 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.141 [2024-04-18 11:13:54.221705] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.141 [2024-04-18 11:13:54.221774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.141 2024/04/18 11:13:54 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.141 [2024-04-18 11:13:54.239508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.141 [2024-04-18 11:13:54.239573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.142 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.142 [2024-04-18 11:13:54.253502] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.142 [2024-04-18 11:13:54.253572] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.142 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.142 [2024-04-18 11:13:54.272064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.142 [2024-04-18 11:13:54.272146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.142 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.142 [2024-04-18 11:13:54.289688] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.142 [2024-04-18 11:13:54.289753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.142 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.142 [2024-04-18 11:13:54.307374] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.142 [2024-04-18 11:13:54.307440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.142 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.142 [2024-04-18 11:13:54.326317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.142 [2024-04-18 11:13:54.326387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.142 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.142 [2024-04-18 11:13:54.344471] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.142 [2024-04-18 11:13:54.344541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.142 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.402 [2024-04-18 11:13:54.363526] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.402 [2024-04-18 11:13:54.363599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.402 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.402 [2024-04-18 11:13:54.379866] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.402 [2024-04-18 11:13:54.379940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.402 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.398209] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.398276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.416022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.416089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.434124] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.434184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.447598] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.447654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.465618] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.465684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.482790] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.482864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.500462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.500527] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.518100] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.518184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.536015] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.536092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.550003] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.550068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.568489] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.568582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.586419] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.586499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.604193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.604265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.403 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.403 [2024-04-18 11:13:54.621912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.403 [2024-04-18 11:13:54.621976] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.663 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.663 [2024-04-18 11:13:54.638736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.663 [2024-04-18 11:13:54.638800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.663 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.663 [2024-04-18 11:13:54.656084] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.663 [2024-04-18 11:13:54.656174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.663 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.663 [2024-04-18 11:13:54.673638] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.663 [2024-04-18 11:13:54.673717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.663 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.663 [2024-04-18 11:13:54.691816] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.663 [2024-04-18 11:13:54.691909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.663 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.663 [2024-04-18 11:13:54.710215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.663 [2024-04-18 11:13:54.710291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.663 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.663 [2024-04-18 11:13:54.727004] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.663 [2024-04-18 11:13:54.727091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.663 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.663 [2024-04-18 11:13:54.744624] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.663 [2024-04-18 11:13:54.744912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.663 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.663 [2024-04-18 11:13:54.762982] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.663 [2024-04-18 11:13:54.763354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.664 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.664 [2024-04-18 11:13:54.781623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.664 [2024-04-18 11:13:54.781954] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.664 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.664 [2024-04-18 11:13:54.799173] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:46.664 [2024-04-18 11:13:54.799505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.664 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.664 [2024-04-18 11:13:54.817860] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.664 [2024-04-18 11:13:54.817955] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.664 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.664 [2024-04-18 11:13:54.831699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.664 [2024-04-18 11:13:54.831764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.664 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.664 [2024-04-18 11:13:54.850933] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.664 [2024-04-18 11:13:54.851023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.664 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.664 [2024-04-18 11:13:54.869944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.664 [2024-04-18 11:13:54.870031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.664 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:54.888602] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:54.888666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:54.907036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:54.907148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:54.923947] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:54.924042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:54.942376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:54.942438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:54.959184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:54.959256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:54.976026] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:54.976097] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:54.992736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:54.992812] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:55.004990] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:55.005049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:55.021129] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:55.021182] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:55.038613] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:55.038675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:55.057042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:55.057117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:55.074814] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:55.074870] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:55.088411] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:55.088465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:55.107138] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:55.107193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:55.124407] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.923 [2024-04-18 11:13:55.124475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.923 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:46.923 [2024-04-18 11:13:55.141999] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.924 [2024-04-18 11:13:55.142055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.184 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.184 [2024-04-18 11:13:55.157993] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.184 [2024-04-18 11:13:55.158049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.184 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.184 [2024-04-18 11:13:55.174569] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.184 [2024-04-18 11:13:55.174635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.184 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.184 [2024-04-18 11:13:55.187681] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.184 [2024-04-18 11:13:55.187736] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.184 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.184 [2024-04-18 11:13:55.207033] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.184 [2024-04-18 11:13:55.207123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.184 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.184 [2024-04-18 11:13:55.222295] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.184 [2024-04-18 11:13:55.222358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.184 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.184 [2024-04-18 11:13:55.239793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.184 [2024-04-18 11:13:55.239856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:47.184 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.184 [2024-04-18 11:13:55.256918] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.185 [2024-04-18 11:13:55.256990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.185 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.185 [2024-04-18 11:13:55.270447] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.185 [2024-04-18 11:13:55.270510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.185 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.185 [2024-04-18 11:13:55.288996] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.185 [2024-04-18 11:13:55.289056] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.185 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.185 [2024-04-18 11:13:55.306629] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.185 [2024-04-18 11:13:55.306697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.185 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.185 [2024-04-18 11:13:55.325178] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.185 [2024-04-18 11:13:55.325262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.185 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.185 [2024-04-18 11:13:55.338655] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.185 [2024-04-18 11:13:55.338720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.185 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:47.185 [2024-04-18 11:13:55.358546] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.185 [2024-04-18 11:13:55.358661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.185 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.185 [2024-04-18 11:13:55.373071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.185 [2024-04-18 11:13:55.373159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.185 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.185 [2024-04-18 11:13:55.390316] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.185 [2024-04-18 11:13:55.390395] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.185 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.408464] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.408531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.421924] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.421995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.441744] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.441835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.460419] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.460501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.474664] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.474734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.492719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.492809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.511435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.511522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.529542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.529612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.543174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.543231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.562134] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.562227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.579415] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.579495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.597462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.597527] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.615188] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.615265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.632212] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.632286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.645948] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.646009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.444 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.444 [2024-04-18 11:13:55.662317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.444 [2024-04-18 11:13:55.662388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.680514] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.680583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.696771] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.696828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.713246] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.713309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.726480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.726536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.744392] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.744450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.762509] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.762568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.779171] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.779230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.796796] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.796852] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.813045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.813121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.830943] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.831013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.847183] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.847240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.863307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.863361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.880527] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.880585] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.898710] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.898767] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.703 [2024-04-18 11:13:55.915093] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.703 [2024-04-18 11:13:55.915165] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.703 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.961 [2024-04-18 11:13:55.932638] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.961 [2024-04-18 11:13:55.932704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.961 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.961 [2024-04-18 11:13:55.949018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:55.949087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:55.962295] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:55.962381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:55.981469] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:55.981539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:55.999273] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:55.999339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:56.017181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:47.962 [2024-04-18 11:13:56.017256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:56.030693] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:56.030765] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:56.049661] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:56.049730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:56.068083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:56.068163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:56.084792] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:56.084869] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:56.101931] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:56.102006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:56.119012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:56.119083] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:56.136535] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:56.136613] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:56.153222] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:56.153295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:47.962 [2024-04-18 11:13:56.170891] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:47.962 [2024-04-18 11:13:56.170954] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:47.962 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.187589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.187663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.205845] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.205925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.224407] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.224489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.242659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.242735] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.259636] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.259714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.276545] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.276629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.290297] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.290360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.308596] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.308662] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.327583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.327666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.346224] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.346289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.363954] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.364027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.382134] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.382210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.395772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.395835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.414729] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.414795] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.220 [2024-04-18 11:13:56.432054] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.220 [2024-04-18 11:13:56.432128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.220 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.479 [2024-04-18 11:13:56.450236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.479 [2024-04-18 11:13:56.450293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.479 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.479 [2024-04-18 11:13:56.466552] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.479 [2024-04-18 11:13:56.466611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:48.479 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.479 [2024-04-18 11:13:56.482719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.479 [2024-04-18 11:13:56.482775] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.479 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.479 [2024-04-18 11:13:56.500695] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.479 [2024-04-18 11:13:56.500752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.479 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.479 [2024-04-18 11:13:56.518213] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.479 [2024-04-18 11:13:56.518265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.479 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.479 [2024-04-18 11:13:56.535597] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.479 [2024-04-18 11:13:56.535651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.479 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.479 [2024-04-18 11:13:56.552155] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.479 [2024-04-18 11:13:56.552211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.479 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.479 [2024-04-18 11:13:56.569727] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.480 [2024-04-18 11:13:56.569784] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.480 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters
00:22:48.480 [2024-04-18 11:13:56.587801] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:48.480 [2024-04-18 11:13:56.587855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:22:48.480 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:22:48.480 [2024-04-18 11:13:56.605383] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:48.480 [2024-04-18 11:13:56.605440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:22:48.480 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:22:48.480 [2024-04-18 11:13:56.622431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:48.480 [2024-04-18 11:13:56.622513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:22:48.480 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:22:48.480 [2024-04-18 11:13:56.641585] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:48.480 [2024-04-18 11:13:56.641650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:22:48.480 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:22:48.480 [2024-04-18 11:13:56.658474] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:48.480 [2024-04-18 11:13:56.658548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:22:48.480 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:22:48.480 [2024-04-18 11:13:56.670802] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:48.480 [2024-04-18 11:13:56.670866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:22:48.480
00:22:48.480 Latency(us)
00:22:48.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:48.480 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:22:48.480 Nvme1n1 : 5.01 8458.52 66.08 0.00 0.00 15110.28 4140.68 23950.43
00:22:48.480 ===================================================================================================================
00:22:48.480 Total : 8458.52 66.08 0.00 0.00 15110.28 4140.68 23950.43
00:22:48.480 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns,
params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.480 [2024-04-18 11:13:56.682996] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.480 [2024-04-18 11:13:56.683057] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.480 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.480 [2024-04-18 11:13:56.694968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.480 [2024-04-18 11:13:56.695030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.480 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.706996] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.707059] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.718978] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.719041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.731012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.731072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.743031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.743096] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.754984] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.755042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.767012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.767068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.779012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.779068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.791009] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.791069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.803026] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.803083] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.814982] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.815029] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.827043] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.827125] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.839063] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.839134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.851025] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.851084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.863064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.738 [2024-04-18 11:13:56.863136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.738 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.738 [2024-04-18 11:13:56.875057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.739 [2024-04-18 11:13:56.875127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.739 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.739 [2024-04-18 11:13:56.887070] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.739 [2024-04-18 11:13:56.887140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.739 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.739 [2024-04-18 11:13:56.899075] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.739 [2024-04-18 11:13:56.899146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.739 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.739 [2024-04-18 11:13:56.911064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:48.739 [2024-04-18 11:13:56.911139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.739 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.739 [2024-04-18 11:13:56.923064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.739 [2024-04-18 11:13:56.923138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.739 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.739 [2024-04-18 11:13:56.935126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.739 [2024-04-18 11:13:56.935190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.739 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.739 [2024-04-18 11:13:56.947044] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.739 [2024-04-18 11:13:56.947127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.739 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:56.959090] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:56.959167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:56.971123] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:56.971186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:56.983080] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:56.983160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:56.995126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:56.995189] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:57.007085] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:57.007160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:57.019122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:57.019347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:57.031166] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:57.031227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:57.043072] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:57.043150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:57.055085] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:57.055143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:57.063064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:57.063126] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:57.075070] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:57.075122] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:57.087075] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:57.087137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.997 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.997 [2024-04-18 11:13:57.099052] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.997 [2024-04-18 11:13:57.099099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.998 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.998 [2024-04-18 11:13:57.111154] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.998 [2024-04-18 11:13:57.111211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.998 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.998 [2024-04-18 11:13:57.123174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.998 [2024-04-18 11:13:57.123234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.998 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.998 [2024-04-18 11:13:57.135091] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.998 [2024-04-18 11:13:57.135164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.998 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.998 [2024-04-18 11:13:57.147101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.998 [2024-04-18 11:13:57.147158] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.998 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.998 [2024-04-18 11:13:57.159125] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.998 [2024-04-18 11:13:57.159167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.998 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.998 [2024-04-18 11:13:57.171134] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.998 [2024-04-18 11:13:57.171178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.998 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.998 [2024-04-18 11:13:57.183127] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.998 [2024-04-18 11:13:57.183169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.998 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.998 [2024-04-18 11:13:57.195087] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.998 [2024-04-18 11:13:57.195137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.998 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:48.998 [2024-04-18 11:13:57.207125] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:48.998 [2024-04-18 11:13:57.207167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:48.998 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.219166] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.219208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.231095] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.231149] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.243149] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.243191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.255147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.255190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.267176] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.267231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.279217] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.279280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.291166] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.291212] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:49.257 [2024-04-18 11:13:57.303202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.303249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.315207] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.315253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.327229] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.327286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.339239] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.339291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.351233] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.351284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.363226] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.363300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.375255] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.375312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.387242] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.387298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.399279] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.399340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.411278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.411340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.423265] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.423328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.435309] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.435373] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.447303] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.257 [2024-04-18 11:13:57.447375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.257 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.257 [2024-04-18 11:13:57.459307] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.258 [2024-04-18 11:13:57.459364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.258 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.258 [2024-04-18 11:13:57.471288] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.258 [2024-04-18 11:13:57.471344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.258 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.483278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.483350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.495317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.495378] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.507338] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.507397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.519299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.519361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.531364] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.531431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.543332] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.543398] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.555319] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.555383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.567353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.567416] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.579329] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.579388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.591348] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.591403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.516 [2024-04-18 11:13:57.603335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.516 [2024-04-18 11:13:57.603385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.516 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.615321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.615375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.517 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.627396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.627465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.517 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.639385] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.639456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.517 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.651360] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.651415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.517 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.663386] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.663444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.517 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.675371] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.675450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.517 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.687409] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.687478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.517 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.699402] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.699478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.517 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.711344] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.711396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.517 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.723357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.723409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.517 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.517 [2024-04-18 11:13:57.735354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.517 [2024-04-18 11:13:57.735402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.774 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.774 [2024-04-18 11:13:57.747363] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.774 [2024-04-18 11:13:57.747410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.774 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.774 [2024-04-18 11:13:57.759362] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.774 [2024-04-18 11:13:57.759411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.775 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.775 [2024-04-18 11:13:57.771348] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
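Every iteration in this block is the same rejected request: the zcopy test keeps re-submitting nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that namespace is already backed by malloc0, so the target answers each attempt with JSON-RPC error -32602 (Invalid parameters). Reconstructed from the params map echoed in the errors, the call being retried looks roughly like the sketch below; the test actually drives it through the bash rpc_cmd helper and the Go JSON-RPC client rather than rpc.py, so treat the exact invocation as illustrative only.

  # Sketch of the call this loop keeps retrying (illustrative; the rpc.py
  # invocation is an assumption, not taken from this log):
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # equivalent JSON-RPC params, as printed in the errors above:
  #   {"nqn": "nqn.2016-06.io.spdk:cnode1",
  #    "namespace": {"bdev_name": "malloc0", "nsid": 1}}
  # => Code=-32602 Msg=Invalid parameters while NSID 1 is already in use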
00:22:49.775 [2024-04-18 11:13:57.771393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.775 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.775 [2024-04-18 11:13:57.783381] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.775 [2024-04-18 11:13:57.783428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.775 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.775 [2024-04-18 11:13:57.795399] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.775 [2024-04-18 11:13:57.795448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.775 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.775 [2024-04-18 11:13:57.811382] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.775 [2024-04-18 11:13:57.811620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.775 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.775 [2024-04-18 11:13:57.823417] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:49.775 [2024-04-18 11:13:57.823698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:49.775 2024/04/18 11:13:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:49.775 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76607) - No such process 00:22:49.775 11:13:57 -- target/zcopy.sh@49 -- # wait 76607 00:22:49.775 11:13:57 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:49.775 11:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.775 11:13:57 -- common/autotest_common.sh@10 -- # set +x 00:22:49.775 11:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.775 11:13:57 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:49.775 11:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.775 11:13:57 -- common/autotest_common.sh@10 -- # set +x 00:22:49.775 delay0 00:22:49.775 11:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.775 11:13:57 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
delay0 -n 1 00:22:49.775 11:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.775 11:13:57 -- common/autotest_common.sh@10 -- # set +x 00:22:49.775 11:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.775 11:13:57 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:22:50.032 [2024-04-18 11:13:58.085799] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:22:56.591 Initializing NVMe Controllers 00:22:56.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:56.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:56.591 Initialization complete. Launching workers. 00:22:56.591 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 79 00:22:56.591 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 366, failed to submit 33 00:22:56.591 success 188, unsuccess 178, failed 0 00:22:56.591 11:14:04 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:22:56.591 11:14:04 -- target/zcopy.sh@60 -- # nvmftestfini 00:22:56.591 11:14:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:56.591 11:14:04 -- nvmf/common.sh@117 -- # sync 00:22:56.591 11:14:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.591 11:14:04 -- nvmf/common.sh@120 -- # set +e 00:22:56.591 11:14:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.591 11:14:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.591 rmmod nvme_tcp 00:22:56.591 rmmod nvme_fabrics 00:22:56.591 rmmod nvme_keyring 00:22:56.591 11:14:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.591 11:14:04 -- nvmf/common.sh@124 -- # set -e 00:22:56.591 11:14:04 -- nvmf/common.sh@125 -- # return 0 00:22:56.591 11:14:04 -- nvmf/common.sh@478 -- # '[' -n 76422 ']' 00:22:56.591 11:14:04 -- nvmf/common.sh@479 -- # killprocess 76422 00:22:56.591 11:14:04 -- common/autotest_common.sh@936 -- # '[' -z 76422 ']' 00:22:56.591 11:14:04 -- common/autotest_common.sh@940 -- # kill -0 76422 00:22:56.591 11:14:04 -- common/autotest_common.sh@941 -- # uname 00:22:56.591 11:14:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:56.591 11:14:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76422 00:22:56.591 11:14:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:56.591 killing process with pid 76422 00:22:56.591 11:14:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:56.591 11:14:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76422' 00:22:56.591 11:14:04 -- common/autotest_common.sh@955 -- # kill 76422 00:22:56.591 11:14:04 -- common/autotest_common.sh@960 -- # wait 76422 00:22:57.526 11:14:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:57.526 11:14:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:57.526 11:14:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:57.526 11:14:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.526 11:14:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.526 11:14:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.526 11:14:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.526 11:14:05 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:22:57.526 11:14:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:57.526 00:22:57.526 real 0m28.720s 00:22:57.526 user 0m47.482s 00:22:57.526 sys 0m6.648s 00:22:57.526 11:14:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:57.526 ************************************ 00:22:57.526 END TEST nvmf_zcopy 00:22:57.526 11:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:57.526 ************************************ 00:22:57.526 11:14:05 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:57.526 11:14:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:57.526 11:14:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:57.526 11:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:57.526 ************************************ 00:22:57.526 START TEST nvmf_nmic 00:22:57.526 ************************************ 00:22:57.526 11:14:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:57.784 * Looking for test storage... 00:22:57.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:57.784 11:14:05 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:57.784 11:14:05 -- nvmf/common.sh@7 -- # uname -s 00:22:57.784 11:14:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.784 11:14:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.784 11:14:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.784 11:14:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.784 11:14:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.784 11:14:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.784 11:14:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.784 11:14:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.784 11:14:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.784 11:14:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.784 11:14:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:22:57.784 11:14:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:22:57.784 11:14:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.784 11:14:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.784 11:14:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:57.784 11:14:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.784 11:14:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:57.784 11:14:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.784 11:14:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.784 11:14:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.784 11:14:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.784 11:14:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.784 11:14:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.784 11:14:05 -- paths/export.sh@5 -- # export PATH 00:22:57.784 11:14:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.784 11:14:05 -- nvmf/common.sh@47 -- # : 0 00:22:57.784 11:14:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:57.784 11:14:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:57.784 11:14:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.784 11:14:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.784 11:14:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.784 11:14:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:57.784 11:14:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:57.784 11:14:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:57.784 11:14:05 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:57.784 11:14:05 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:57.784 11:14:05 -- target/nmic.sh@14 -- # nvmftestinit 00:22:57.784 11:14:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:57.784 11:14:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.784 11:14:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:57.784 11:14:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:57.784 11:14:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:57.784 11:14:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:57.784 11:14:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.784 11:14:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.784 11:14:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:57.784 11:14:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:57.784 11:14:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:57.784 11:14:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:57.784 11:14:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:57.784 11:14:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:57.784 11:14:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.784 11:14:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.784 11:14:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:57.784 11:14:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:57.784 11:14:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:57.784 11:14:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:57.784 11:14:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:57.784 11:14:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.784 11:14:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:57.784 11:14:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:57.784 11:14:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:57.784 11:14:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:57.784 11:14:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:57.784 11:14:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:57.784 Cannot find device "nvmf_tgt_br" 00:22:57.784 11:14:05 -- nvmf/common.sh@155 -- # true 00:22:57.784 11:14:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:57.784 Cannot find device "nvmf_tgt_br2" 00:22:57.784 11:14:05 -- nvmf/common.sh@156 -- # true 00:22:57.784 11:14:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:57.784 11:14:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:57.784 Cannot find device "nvmf_tgt_br" 00:22:57.784 11:14:05 -- nvmf/common.sh@158 -- # true 00:22:57.784 11:14:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:57.784 Cannot find device "nvmf_tgt_br2" 00:22:57.784 11:14:05 -- nvmf/common.sh@159 -- # true 00:22:57.784 11:14:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:57.784 11:14:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:57.784 11:14:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:57.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:57.784 11:14:05 -- nvmf/common.sh@162 -- # true 00:22:57.784 11:14:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:57.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:57.784 11:14:05 -- nvmf/common.sh@163 -- # true 00:22:57.784 11:14:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:57.784 11:14:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:57.784 11:14:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:57.784 11:14:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:57.784 
11:14:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:57.784 11:14:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:58.043 11:14:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:58.043 11:14:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:58.043 11:14:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:58.043 11:14:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:58.043 11:14:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:58.043 11:14:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:58.043 11:14:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:58.043 11:14:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:58.043 11:14:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:58.043 11:14:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:58.043 11:14:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:58.043 11:14:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:58.043 11:14:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:58.043 11:14:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:58.043 11:14:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:58.043 11:14:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:58.043 11:14:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:58.043 11:14:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:58.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:22:58.043 00:22:58.043 --- 10.0.0.2 ping statistics --- 00:22:58.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.043 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:22:58.043 11:14:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:58.043 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:58.043 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:22:58.043 00:22:58.043 --- 10.0.0.3 ping statistics --- 00:22:58.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.043 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:58.043 11:14:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:58.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:58.043 00:22:58.043 --- 10.0.0.1 ping statistics --- 00:22:58.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.043 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:58.043 11:14:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.043 11:14:06 -- nvmf/common.sh@422 -- # return 0 00:22:58.043 11:14:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:58.043 11:14:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.043 11:14:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:58.043 11:14:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:58.043 11:14:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.043 11:14:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:58.043 11:14:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:58.043 11:14:06 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:22:58.043 11:14:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:58.043 11:14:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:58.043 11:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:58.043 11:14:06 -- nvmf/common.sh@470 -- # nvmfpid=76956 00:22:58.043 11:14:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:58.043 11:14:06 -- nvmf/common.sh@471 -- # waitforlisten 76956 00:22:58.043 11:14:06 -- common/autotest_common.sh@817 -- # '[' -z 76956 ']' 00:22:58.043 11:14:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.043 11:14:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:58.043 11:14:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.043 11:14:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:58.043 11:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:58.303 [2024-04-18 11:14:06.304174] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:58.303 [2024-04-18 11:14:06.304325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.303 [2024-04-18 11:14:06.469307] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.564 [2024-04-18 11:14:06.718341] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.564 [2024-04-18 11:14:06.718426] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.564 [2024-04-18 11:14:06.718448] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.564 [2024-04-18 11:14:06.718462] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.564 [2024-04-18 11:14:06.718477] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
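Condensed from the nvmf_veth_init xtrace above, the test network amounts to roughly the following standalone sketch. Interface names and 10.0.0.x addresses are the ones used in this log; the second target interface (nvmf_tgt_if2 at 10.0.0.3) is omitted for brevity, so treat this as a gist rather than the exact helper.

    # target-side namespace plus two veth pairs; the *_if ends carry addresses,
    # the *_br peer ends get enslaved to a bridge in the root namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target listener address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP to port 4420
    ping -c 1 10.0.0.2                                                      # initiator -> target sanity check, as in the log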
00:22:58.564 [2024-04-18 11:14:06.718641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.564 [2024-04-18 11:14:06.719571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.564 [2024-04-18 11:14:06.719640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.564 [2024-04-18 11:14:06.719647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.130 11:14:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:59.130 11:14:07 -- common/autotest_common.sh@850 -- # return 0 00:22:59.130 11:14:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:59.130 11:14:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:59.130 11:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.130 11:14:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.130 11:14:07 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:59.130 11:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.130 11:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.130 [2024-04-18 11:14:07.232977] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.130 11:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.130 11:14:07 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:59.130 11:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.130 11:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.130 Malloc0 00:22:59.130 11:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.130 11:14:07 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:59.130 11:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.130 11:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.130 11:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.130 11:14:07 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:59.130 11:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.130 11:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.130 11:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.130 11:14:07 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.130 11:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.130 11:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.388 [2024-04-18 11:14:07.353676] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.388 11:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.388 test case1: single bdev can't be used in multiple subsystems 00:22:59.388 11:14:07 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:22:59.388 11:14:07 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:59.388 11:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.388 11:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.388 11:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.388 11:14:07 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:59.388 11:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:22:59.388 11:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.388 11:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.388 11:14:07 -- target/nmic.sh@28 -- # nmic_status=0 00:22:59.388 11:14:07 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:22:59.388 11:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.388 11:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.388 [2024-04-18 11:14:07.377522] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:22:59.388 [2024-04-18 11:14:07.377594] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:22:59.388 [2024-04-18 11:14:07.377616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.388 2024/04/18 11:14:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:59.388 request: 00:22:59.388 { 00:22:59.388 "method": "nvmf_subsystem_add_ns", 00:22:59.388 "params": { 00:22:59.388 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:22:59.388 "namespace": { 00:22:59.388 "bdev_name": "Malloc0", 00:22:59.388 "no_auto_visible": false 00:22:59.388 } 00:22:59.388 } 00:22:59.388 } 00:22:59.388 Got JSON-RPC error response 00:22:59.388 GoRPCClient: error on JSON-RPC call 00:22:59.388 Adding namespace failed - expected result. 00:22:59.388 test case2: host connect to nvmf target in multiple paths 00:22:59.388 11:14:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:59.388 11:14:07 -- target/nmic.sh@29 -- # nmic_status=1 00:22:59.388 11:14:07 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:22:59.388 11:14:07 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
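The rpc_cmd sequence traced above for test case 1 can be replayed by hand against the running target; a minimal sketch using the same scripts/rpc.py that rpc_cmd wraps in this run. The last call is expected to fail with JSON-RPC code -32602, because Malloc0 is already claimed exclusive_write by cnode1.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: bdev already claimed by cnode1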
00:22:59.388 11:14:07 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:22:59.388 11:14:07 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:59.388 11:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.388 11:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.388 [2024-04-18 11:14:07.389714] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.388 11:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.388 11:14:07 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:59.388 11:14:07 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:22:59.646 11:14:07 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:22:59.646 11:14:07 -- common/autotest_common.sh@1184 -- # local i=0 00:22:59.646 11:14:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:59.646 11:14:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:59.646 11:14:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:01.566 11:14:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:01.567 11:14:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:23:01.567 11:14:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:01.567 11:14:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:01.567 11:14:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:01.567 11:14:09 -- common/autotest_common.sh@1194 -- # return 0 00:23:01.567 11:14:09 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:01.567 [global] 00:23:01.567 thread=1 00:23:01.567 invalidate=1 00:23:01.567 rw=write 00:23:01.567 time_based=1 00:23:01.567 runtime=1 00:23:01.567 ioengine=libaio 00:23:01.567 direct=1 00:23:01.567 bs=4096 00:23:01.567 iodepth=1 00:23:01.567 norandommap=0 00:23:01.567 numjobs=1 00:23:01.567 00:23:01.567 verify_dump=1 00:23:01.567 verify_backlog=512 00:23:01.567 verify_state_save=0 00:23:01.567 do_verify=1 00:23:01.567 verify=crc32c-intel 00:23:01.567 [job0] 00:23:01.567 filename=/dev/nvme0n1 00:23:01.825 Could not set queue depth (nvme0n1) 00:23:01.825 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:01.825 fio-3.35 00:23:01.825 Starting 1 thread 00:23:03.200 00:23:03.200 job0: (groupid=0, jobs=1): err= 0: pid=77066: Thu Apr 18 11:14:11 2024 00:23:03.200 read: IOPS=2437, BW=9750KiB/s (9984kB/s)(9760KiB/1001msec) 00:23:03.200 slat (nsec): min=14317, max=50048, avg=16230.48, stdev=3990.36 00:23:03.200 clat (usec): min=182, max=333, avg=203.14, stdev=12.59 00:23:03.200 lat (usec): min=197, max=348, avg=219.37, stdev=13.92 00:23:03.200 clat percentiles (usec): 00:23:03.200 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 194], 00:23:03.200 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:23:03.200 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 225], 00:23:03.200 | 99.00th=[ 241], 99.50th=[ 255], 
99.90th=[ 306], 99.95th=[ 322], 00:23:03.200 | 99.99th=[ 334] 00:23:03.200 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:23:03.200 slat (usec): min=19, max=146, avg=26.46, stdev= 9.76 00:23:03.200 clat (usec): min=132, max=1051, avg=151.45, stdev=22.16 00:23:03.201 lat (usec): min=153, max=1073, avg=177.90, stdev=26.00 00:23:03.201 clat percentiles (usec): 00:23:03.201 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:23:03.201 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:23:03.201 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 00:23:03.201 | 99.00th=[ 186], 99.50th=[ 202], 99.90th=[ 351], 99.95th=[ 465], 00:23:03.201 | 99.99th=[ 1057] 00:23:03.201 bw ( KiB/s): min=11800, max=11800, per=100.00%, avg=11800.00, stdev= 0.00, samples=1 00:23:03.201 iops : min= 2950, max= 2950, avg=2950.00, stdev= 0.00, samples=1 00:23:03.201 lat (usec) : 250=99.58%, 500=0.40% 00:23:03.201 lat (msec) : 2=0.02% 00:23:03.201 cpu : usr=2.70%, sys=7.20%, ctx=5000, majf=0, minf=2 00:23:03.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:03.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.201 issued rwts: total=2440,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:03.201 00:23:03.201 Run status group 0 (all jobs): 00:23:03.201 READ: bw=9750KiB/s (9984kB/s), 9750KiB/s-9750KiB/s (9984kB/s-9984kB/s), io=9760KiB (9994kB), run=1001-1001msec 00:23:03.201 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:23:03.201 00:23:03.201 Disk stats (read/write): 00:23:03.201 nvme0n1: ios=2097/2464, merge=0/0, ticks=446/389, in_queue=835, util=91.07% 00:23:03.201 11:14:11 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:03.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:23:03.201 11:14:11 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:03.201 11:14:11 -- common/autotest_common.sh@1205 -- # local i=0 00:23:03.201 11:14:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:03.201 11:14:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:03.201 11:14:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:03.201 11:14:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:03.201 11:14:11 -- common/autotest_common.sh@1217 -- # return 0 00:23:03.201 11:14:11 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:03.201 11:14:11 -- target/nmic.sh@53 -- # nvmftestfini 00:23:03.201 11:14:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:03.201 11:14:11 -- nvmf/common.sh@117 -- # sync 00:23:03.201 11:14:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.201 11:14:11 -- nvmf/common.sh@120 -- # set +e 00:23:03.201 11:14:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.201 11:14:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.201 rmmod nvme_tcp 00:23:03.201 rmmod nvme_fabrics 00:23:03.201 rmmod nvme_keyring 00:23:03.201 11:14:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.201 11:14:11 -- nvmf/common.sh@124 -- # set -e 00:23:03.201 11:14:11 -- nvmf/common.sh@125 -- # return 0 00:23:03.201 11:14:11 -- nvmf/common.sh@478 -- # '[' -n 76956 ']' 00:23:03.201 11:14:11 -- 
nvmf/common.sh@479 -- # killprocess 76956 00:23:03.201 11:14:11 -- common/autotest_common.sh@936 -- # '[' -z 76956 ']' 00:23:03.201 11:14:11 -- common/autotest_common.sh@940 -- # kill -0 76956 00:23:03.201 11:14:11 -- common/autotest_common.sh@941 -- # uname 00:23:03.201 11:14:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:03.201 11:14:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76956 00:23:03.201 killing process with pid 76956 00:23:03.201 11:14:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:03.201 11:14:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:03.201 11:14:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76956' 00:23:03.201 11:14:11 -- common/autotest_common.sh@955 -- # kill 76956 00:23:03.201 11:14:11 -- common/autotest_common.sh@960 -- # wait 76956 00:23:04.607 11:14:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:04.607 11:14:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:04.607 11:14:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:04.607 11:14:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.607 11:14:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.607 11:14:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.607 11:14:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.607 11:14:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.607 11:14:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:04.607 ************************************ 00:23:04.607 END TEST nvmf_nmic 00:23:04.607 ************************************ 00:23:04.607 00:23:04.607 real 0m7.066s 00:23:04.607 user 0m22.423s 00:23:04.607 sys 0m1.432s 00:23:04.607 11:14:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:04.607 11:14:12 -- common/autotest_common.sh@10 -- # set +x 00:23:04.607 11:14:12 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:04.607 11:14:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:04.607 11:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:04.607 11:14:12 -- common/autotest_common.sh@10 -- # set +x 00:23:04.866 ************************************ 00:23:04.866 START TEST nvmf_fio_target 00:23:04.866 ************************************ 00:23:04.866 11:14:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:04.866 * Looking for test storage... 
00:23:04.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:04.866 11:14:12 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:04.866 11:14:12 -- nvmf/common.sh@7 -- # uname -s 00:23:04.866 11:14:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.866 11:14:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.866 11:14:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.866 11:14:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.866 11:14:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.866 11:14:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.866 11:14:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.866 11:14:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.866 11:14:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.866 11:14:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.866 11:14:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:23:04.866 11:14:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:23:04.866 11:14:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.866 11:14:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.866 11:14:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:04.866 11:14:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.866 11:14:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:04.866 11:14:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.866 11:14:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.866 11:14:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.866 11:14:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.866 11:14:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.866 11:14:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.866 11:14:12 -- paths/export.sh@5 -- # export PATH 00:23:04.866 11:14:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.866 11:14:12 -- nvmf/common.sh@47 -- # : 0 00:23:04.866 11:14:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.866 11:14:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.866 11:14:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.866 11:14:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.866 11:14:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.866 11:14:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.866 11:14:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.866 11:14:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.866 11:14:12 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:04.866 11:14:12 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:04.866 11:14:12 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:04.866 11:14:12 -- target/fio.sh@16 -- # nvmftestinit 00:23:04.866 11:14:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:04.866 11:14:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.866 11:14:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:04.866 11:14:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:04.866 11:14:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:04.866 11:14:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.866 11:14:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.866 11:14:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.866 11:14:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:04.866 11:14:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:04.866 11:14:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:04.866 11:14:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:04.866 11:14:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:04.866 11:14:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:04.866 11:14:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.866 11:14:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.866 11:14:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:04.866 11:14:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:04.866 11:14:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:04.866 11:14:12 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:04.866 11:14:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:04.866 11:14:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.866 11:14:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:04.866 11:14:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:04.866 11:14:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:04.866 11:14:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:04.866 11:14:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:04.866 11:14:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:04.866 Cannot find device "nvmf_tgt_br" 00:23:04.866 11:14:13 -- nvmf/common.sh@155 -- # true 00:23:04.866 11:14:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:04.866 Cannot find device "nvmf_tgt_br2" 00:23:04.866 11:14:13 -- nvmf/common.sh@156 -- # true 00:23:04.866 11:14:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:04.866 11:14:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:04.866 Cannot find device "nvmf_tgt_br" 00:23:04.866 11:14:13 -- nvmf/common.sh@158 -- # true 00:23:04.866 11:14:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:04.866 Cannot find device "nvmf_tgt_br2" 00:23:04.866 11:14:13 -- nvmf/common.sh@159 -- # true 00:23:04.866 11:14:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:05.124 11:14:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:05.124 11:14:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.124 11:14:13 -- nvmf/common.sh@162 -- # true 00:23:05.124 11:14:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.124 11:14:13 -- nvmf/common.sh@163 -- # true 00:23:05.124 11:14:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:05.124 11:14:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:05.124 11:14:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:05.124 11:14:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:05.124 11:14:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:05.124 11:14:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:05.124 11:14:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:05.124 11:14:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:05.124 11:14:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:05.124 11:14:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:05.124 11:14:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:05.124 11:14:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:05.124 11:14:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:05.124 11:14:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:05.124 11:14:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:23:05.124 11:14:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:05.124 11:14:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:05.124 11:14:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:05.124 11:14:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:05.124 11:14:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:05.124 11:14:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:05.124 11:14:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:05.124 11:14:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:05.382 11:14:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:05.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:23:05.382 00:23:05.382 --- 10.0.0.2 ping statistics --- 00:23:05.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.382 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:23:05.382 11:14:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:05.382 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:05.382 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:05.382 00:23:05.382 --- 10.0.0.3 ping statistics --- 00:23:05.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.382 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:05.382 11:14:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:05.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:05.382 00:23:05.382 --- 10.0.0.1 ping statistics --- 00:23:05.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.382 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:05.382 11:14:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.382 11:14:13 -- nvmf/common.sh@422 -- # return 0 00:23:05.382 11:14:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:05.382 11:14:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.382 11:14:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:05.382 11:14:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:05.382 11:14:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.382 11:14:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:05.382 11:14:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:05.382 11:14:13 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:23:05.382 11:14:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:05.382 11:14:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:05.382 11:14:13 -- common/autotest_common.sh@10 -- # set +x 00:23:05.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
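What nvmfappstart amounts to here: the target binary is launched inside the namespace created above, and the harness then blocks until the RPC socket at /var/tmp/spdk.sock is listening. The waitforlisten helper itself is not shown in this excerpt, so the polling loop below is only a rough equivalent, not its actual implementation.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # /var/tmp/spdk.sock is a unix-domain socket, so it is reachable from the root
    # namespace even though the target process runs inside nvmf_tgt_ns_spdk
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done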
00:23:05.382 11:14:13 -- nvmf/common.sh@470 -- # nvmfpid=77266 00:23:05.382 11:14:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:05.382 11:14:13 -- nvmf/common.sh@471 -- # waitforlisten 77266 00:23:05.382 11:14:13 -- common/autotest_common.sh@817 -- # '[' -z 77266 ']' 00:23:05.382 11:14:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.382 11:14:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:05.382 11:14:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.382 11:14:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:05.382 11:14:13 -- common/autotest_common.sh@10 -- # set +x 00:23:05.382 [2024-04-18 11:14:13.483074] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:05.382 [2024-04-18 11:14:13.483463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.641 [2024-04-18 11:14:13.650869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.900 [2024-04-18 11:14:13.898966] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.900 [2024-04-18 11:14:13.899382] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.900 [2024-04-18 11:14:13.899557] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.900 [2024-04-18 11:14:13.899858] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.900 [2024-04-18 11:14:13.900003] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:05.900 [2024-04-18 11:14:13.900188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.900 [2024-04-18 11:14:13.900730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.900 [2024-04-18 11:14:13.900910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.900 [2024-04-18 11:14:13.900933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.467 11:14:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.467 11:14:14 -- common/autotest_common.sh@850 -- # return 0 00:23:06.467 11:14:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:06.467 11:14:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:06.467 11:14:14 -- common/autotest_common.sh@10 -- # set +x 00:23:06.467 11:14:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.467 11:14:14 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:06.725 [2024-04-18 11:14:14.736744] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.725 11:14:14 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:06.983 11:14:15 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:23:06.983 11:14:15 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:07.241 11:14:15 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:23:07.241 11:14:15 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:07.807 11:14:15 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:23:07.807 11:14:15 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:08.065 11:14:16 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:23:08.065 11:14:16 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:23:08.323 11:14:16 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:08.581 11:14:16 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:23:08.581 11:14:16 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:08.839 11:14:17 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:23:08.839 11:14:17 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:09.405 11:14:17 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:23:09.405 11:14:17 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:23:09.663 11:14:17 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:09.922 11:14:17 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:09.922 11:14:17 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:10.180 11:14:18 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:10.180 11:14:18 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:10.438 11:14:18 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.696 [2024-04-18 11:14:18.696564] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.696 11:14:18 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:23:10.954 11:14:18 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:23:11.212 11:14:19 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:11.471 11:14:19 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:23:11.471 11:14:19 -- common/autotest_common.sh@1184 -- # local i=0 00:23:11.471 11:14:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:11.471 11:14:19 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:23:11.471 11:14:19 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:23:11.471 11:14:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:13.374 11:14:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:13.374 11:14:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:13.374 11:14:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:23:13.374 11:14:21 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:23:13.374 11:14:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:13.374 11:14:21 -- common/autotest_common.sh@1194 -- # return 0 00:23:13.374 11:14:21 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:13.374 [global] 00:23:13.374 thread=1 00:23:13.374 invalidate=1 00:23:13.374 rw=write 00:23:13.374 time_based=1 00:23:13.374 runtime=1 00:23:13.374 ioengine=libaio 00:23:13.374 direct=1 00:23:13.374 bs=4096 00:23:13.374 iodepth=1 00:23:13.374 norandommap=0 00:23:13.374 numjobs=1 00:23:13.374 00:23:13.374 verify_dump=1 00:23:13.374 verify_backlog=512 00:23:13.374 verify_state_save=0 00:23:13.374 do_verify=1 00:23:13.374 verify=crc32c-intel 00:23:13.374 [job0] 00:23:13.374 filename=/dev/nvme0n1 00:23:13.374 [job1] 00:23:13.374 filename=/dev/nvme0n2 00:23:13.374 [job2] 00:23:13.374 filename=/dev/nvme0n3 00:23:13.374 [job3] 00:23:13.374 filename=/dev/nvme0n4 00:23:13.632 Could not set queue depth (nvme0n1) 00:23:13.632 Could not set queue depth (nvme0n2) 00:23:13.632 Could not set queue depth (nvme0n3) 00:23:13.632 Could not set queue depth (nvme0n4) 00:23:13.632 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:13.632 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:13.632 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:13.632 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:13.632 fio-3.35 00:23:13.632 Starting 4 threads 00:23:15.037 00:23:15.037 job0: (groupid=0, jobs=1): err= 0: pid=77570: Thu Apr 18 11:14:22 2024 00:23:15.037 read: IOPS=1503, BW=6014KiB/s (6158kB/s)(6020KiB/1001msec) 00:23:15.037 slat (nsec): min=10737, max=37483, avg=14398.11, stdev=2022.72 00:23:15.037 clat (usec): min=196, max=1887, avg=346.67, stdev=43.14 
00:23:15.037 lat (usec): min=218, max=1901, avg=361.07, stdev=43.10 00:23:15.037 clat percentiles (usec): 00:23:15.037 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 334], 00:23:15.037 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 347], 00:23:15.037 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 367], 00:23:15.037 | 99.00th=[ 379], 99.50th=[ 412], 99.90th=[ 562], 99.95th=[ 1893], 00:23:15.037 | 99.99th=[ 1893] 00:23:15.037 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:23:15.037 slat (nsec): min=10973, max=60067, avg=21747.80, stdev=4748.38 00:23:15.037 clat (usec): min=143, max=409, avg=272.40, stdev=28.24 00:23:15.037 lat (usec): min=164, max=423, avg=294.15, stdev=28.08 00:23:15.037 clat percentiles (usec): 00:23:15.037 | 1.00th=[ 159], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 260], 00:23:15.037 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:23:15.037 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:23:15.037 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 392], 99.95th=[ 408], 00:23:15.037 | 99.99th=[ 408] 00:23:15.037 bw ( KiB/s): min= 8192, max= 8192, per=33.37%, avg=8192.00, stdev= 0.00, samples=1 00:23:15.037 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:15.037 lat (usec) : 250=6.41%, 500=93.52%, 750=0.03% 00:23:15.037 lat (msec) : 2=0.03% 00:23:15.037 cpu : usr=1.30%, sys=4.10%, ctx=3042, majf=0, minf=7 00:23:15.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:15.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.038 issued rwts: total=1505,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:15.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:15.038 job1: (groupid=0, jobs=1): err= 0: pid=77571: Thu Apr 18 11:14:22 2024 00:23:15.038 read: IOPS=1502, BW=6010KiB/s (6154kB/s)(6016KiB/1001msec) 00:23:15.038 slat (nsec): min=11196, max=38325, avg=13228.29, stdev=2175.09 00:23:15.038 clat (usec): min=217, max=1760, avg=347.94, stdev=40.43 00:23:15.038 lat (usec): min=229, max=1773, avg=361.17, stdev=40.44 00:23:15.038 clat percentiles (usec): 00:23:15.038 | 1.00th=[ 326], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 338], 00:23:15.038 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:23:15.038 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 363], 95.00th=[ 371], 00:23:15.038 | 99.00th=[ 383], 99.50th=[ 453], 99.90th=[ 562], 99.95th=[ 1762], 00:23:15.038 | 99.99th=[ 1762] 00:23:15.038 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:23:15.038 slat (usec): min=10, max=138, avg=21.78, stdev= 5.70 00:23:15.038 clat (usec): min=140, max=411, avg=272.30, stdev=33.47 00:23:15.038 lat (usec): min=160, max=492, avg=294.07, stdev=33.28 00:23:15.038 clat percentiles (usec): 00:23:15.038 | 1.00th=[ 147], 5.00th=[ 231], 10.00th=[ 243], 20.00th=[ 262], 00:23:15.038 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:23:15.038 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 314], 00:23:15.038 | 99.00th=[ 375], 99.50th=[ 379], 99.90th=[ 412], 99.95th=[ 412], 00:23:15.038 | 99.99th=[ 412] 00:23:15.038 bw ( KiB/s): min= 8192, max= 8192, per=33.37%, avg=8192.00, stdev= 0.00, samples=1 00:23:15.038 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:15.038 lat (usec) : 250=6.18%, 500=93.75%, 750=0.03% 00:23:15.038 lat (msec) : 
2=0.03% 00:23:15.038 cpu : usr=1.50%, sys=3.90%, ctx=3041, majf=0, minf=12 00:23:15.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:15.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.038 issued rwts: total=1504,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:15.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:15.038 job2: (groupid=0, jobs=1): err= 0: pid=77572: Thu Apr 18 11:14:22 2024 00:23:15.038 read: IOPS=1246, BW=4987KiB/s (5107kB/s)(4992KiB/1001msec) 00:23:15.038 slat (nsec): min=16211, max=56286, avg=22483.11, stdev=5289.67 00:23:15.038 clat (usec): min=199, max=721, avg=376.32, stdev=28.82 00:23:15.038 lat (usec): min=215, max=739, avg=398.80, stdev=29.00 00:23:15.038 clat percentiles (usec): 00:23:15.038 | 1.00th=[ 322], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 363], 00:23:15.038 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:23:15.038 | 70.00th=[ 383], 80.00th=[ 388], 90.00th=[ 396], 95.00th=[ 408], 00:23:15.038 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[ 553], 99.95th=[ 725], 00:23:15.038 | 99.99th=[ 725] 00:23:15.038 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:23:15.038 slat (usec): min=27, max=131, avg=41.73, stdev= 7.60 00:23:15.038 clat (usec): min=151, max=456, avg=280.49, stdev=24.45 00:23:15.038 lat (usec): min=187, max=588, avg=322.22, stdev=23.46 00:23:15.038 clat percentiles (usec): 00:23:15.038 | 1.00th=[ 184], 5.00th=[ 249], 10.00th=[ 265], 20.00th=[ 269], 00:23:15.038 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:23:15.038 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:23:15.038 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 420], 99.95th=[ 457], 00:23:15.038 | 99.99th=[ 457] 00:23:15.038 bw ( KiB/s): min= 7256, max= 7256, per=29.55%, avg=7256.00, stdev= 0.00, samples=1 00:23:15.038 iops : min= 1814, max= 1814, avg=1814.00, stdev= 0.00, samples=1 00:23:15.038 lat (usec) : 250=3.20%, 500=96.26%, 750=0.54% 00:23:15.038 cpu : usr=1.70%, sys=6.90%, ctx=2785, majf=0, minf=9 00:23:15.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:15.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.038 issued rwts: total=1248,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:15.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:15.038 job3: (groupid=0, jobs=1): err= 0: pid=77573: Thu Apr 18 11:14:22 2024 00:23:15.038 read: IOPS=1241, BW=4967KiB/s (5086kB/s)(4972KiB/1001msec) 00:23:15.038 slat (nsec): min=17327, max=60541, avg=27763.34, stdev=5171.48 00:23:15.038 clat (usec): min=227, max=1180, avg=368.21, stdev=39.06 00:23:15.038 lat (usec): min=250, max=1212, avg=395.97, stdev=38.65 00:23:15.038 clat percentiles (usec): 00:23:15.038 | 1.00th=[ 281], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:23:15.038 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 371], 00:23:15.038 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 392], 95.00th=[ 400], 00:23:15.038 | 99.00th=[ 416], 99.50th=[ 445], 99.90th=[ 1004], 99.95th=[ 1188], 00:23:15.038 | 99.99th=[ 1188] 00:23:15.038 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:23:15.038 slat (usec): min=26, max=139, avg=40.55, stdev= 6.66 00:23:15.038 clat (usec): min=175, 
max=2617, avg=285.04, stdev=64.43 00:23:15.038 lat (usec): min=211, max=2653, avg=325.58, stdev=64.29 00:23:15.038 clat percentiles (usec): 00:23:15.038 | 1.00th=[ 233], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 273], 00:23:15.038 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:23:15.038 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 322], 00:23:15.038 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 709], 99.95th=[ 2606], 00:23:15.038 | 99.99th=[ 2606] 00:23:15.038 bw ( KiB/s): min= 7264, max= 7264, per=29.59%, avg=7264.00, stdev= 0.00, samples=1 00:23:15.038 iops : min= 1816, max= 1816, avg=1816.00, stdev= 0.00, samples=1 00:23:15.038 lat (usec) : 250=2.09%, 500=97.66%, 750=0.11%, 1000=0.04% 00:23:15.038 lat (msec) : 2=0.07%, 4=0.04% 00:23:15.038 cpu : usr=1.40%, sys=7.70%, ctx=2780, majf=0, minf=7 00:23:15.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:15.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.038 issued rwts: total=1243,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:15.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:15.038 00:23:15.038 Run status group 0 (all jobs): 00:23:15.038 READ: bw=21.5MiB/s (22.5MB/s), 4967KiB/s-6014KiB/s (5086kB/s-6158kB/s), io=21.5MiB (22.5MB), run=1001-1001msec 00:23:15.038 WRITE: bw=24.0MiB/s (25.1MB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:23:15.038 00:23:15.038 Disk stats (read/write): 00:23:15.038 nvme0n1: ios=1178/1536, merge=0/0, ticks=444/434, in_queue=878, util=89.48% 00:23:15.038 nvme0n2: ios=1174/1536, merge=0/0, ticks=453/419, in_queue=872, util=90.89% 00:23:15.038 nvme0n3: ios=1056/1392, merge=0/0, ticks=464/422, in_queue=886, util=91.12% 00:23:15.038 nvme0n4: ios=1045/1391, merge=0/0, ticks=446/413, in_queue=859, util=90.86% 00:23:15.038 11:14:22 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:23:15.038 [global] 00:23:15.038 thread=1 00:23:15.038 invalidate=1 00:23:15.038 rw=randwrite 00:23:15.038 time_based=1 00:23:15.038 runtime=1 00:23:15.038 ioengine=libaio 00:23:15.038 direct=1 00:23:15.038 bs=4096 00:23:15.038 iodepth=1 00:23:15.038 norandommap=0 00:23:15.038 numjobs=1 00:23:15.038 00:23:15.038 verify_dump=1 00:23:15.038 verify_backlog=512 00:23:15.038 verify_state_save=0 00:23:15.038 do_verify=1 00:23:15.038 verify=crc32c-intel 00:23:15.038 [job0] 00:23:15.038 filename=/dev/nvme0n1 00:23:15.038 [job1] 00:23:15.038 filename=/dev/nvme0n2 00:23:15.038 [job2] 00:23:15.038 filename=/dev/nvme0n3 00:23:15.038 [job3] 00:23:15.038 filename=/dev/nvme0n4 00:23:15.038 Could not set queue depth (nvme0n1) 00:23:15.038 Could not set queue depth (nvme0n2) 00:23:15.038 Could not set queue depth (nvme0n3) 00:23:15.038 Could not set queue depth (nvme0n4) 00:23:15.038 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:15.038 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:15.038 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:15.038 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:15.038 fio-3.35 00:23:15.038 Starting 4 threads 00:23:16.411 00:23:16.411 job0: (groupid=0, 
jobs=1): err= 0: pid=77626: Thu Apr 18 11:14:24 2024 00:23:16.411 read: IOPS=1946, BW=7784KiB/s (7971kB/s)(7792KiB/1001msec) 00:23:16.411 slat (nsec): min=11371, max=61147, avg=17486.61, stdev=4005.10 00:23:16.411 clat (usec): min=185, max=1945, avg=286.24, stdev=161.88 00:23:16.411 lat (usec): min=204, max=1963, avg=303.72, stdev=163.39 00:23:16.411 clat percentiles (usec): 00:23:16.411 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:23:16.411 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:23:16.411 | 70.00th=[ 221], 80.00th=[ 449], 90.00th=[ 553], 95.00th=[ 685], 00:23:16.411 | 99.00th=[ 742], 99.50th=[ 775], 99.90th=[ 824], 99.95th=[ 1942], 00:23:16.411 | 99.99th=[ 1942] 00:23:16.411 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:23:16.411 slat (usec): min=16, max=123, avg=25.12, stdev= 5.12 00:23:16.411 clat (usec): min=137, max=7606, avg=170.59, stdev=191.75 00:23:16.411 lat (usec): min=160, max=7626, avg=195.71, stdev=191.70 00:23:16.411 clat percentiles (usec): 00:23:16.411 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:23:16.411 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:23:16.411 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 182], 00:23:16.411 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 2212], 99.95th=[ 3752], 00:23:16.411 | 99.99th=[ 7635] 00:23:16.411 bw ( KiB/s): min=11352, max=11352, per=36.99%, avg=11352.00, stdev= 0.00, samples=1 00:23:16.411 iops : min= 2838, max= 2838, avg=2838.00, stdev= 0.00, samples=1 00:23:16.411 lat (usec) : 250=88.54%, 500=4.13%, 750=6.78%, 1000=0.45% 00:23:16.411 lat (msec) : 2=0.03%, 4=0.05%, 10=0.03% 00:23:16.411 cpu : usr=1.60%, sys=6.60%, ctx=3998, majf=0, minf=12 00:23:16.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:16.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.411 issued rwts: total=1948,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:16.411 job1: (groupid=0, jobs=1): err= 0: pid=77627: Thu Apr 18 11:14:24 2024 00:23:16.411 read: IOPS=1161, BW=4647KiB/s (4759kB/s)(4652KiB/1001msec) 00:23:16.411 slat (nsec): min=11027, max=58595, avg=15970.95, stdev=4463.93 00:23:16.411 clat (usec): min=221, max=637, avg=390.83, stdev=46.05 00:23:16.411 lat (usec): min=234, max=669, avg=406.81, stdev=46.97 00:23:16.411 clat percentiles (usec): 00:23:16.411 | 1.00th=[ 243], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 367], 00:23:16.411 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 388], 00:23:16.411 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 461], 95.00th=[ 490], 00:23:16.411 | 99.00th=[ 523], 99.50th=[ 553], 99.90th=[ 611], 99.95th=[ 635], 00:23:16.411 | 99.99th=[ 635] 00:23:16.411 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:23:16.411 slat (usec): min=11, max=176, avg=24.63, stdev= 8.59 00:23:16.411 clat (usec): min=61, max=539, avg=315.09, stdev=42.29 00:23:16.411 lat (usec): min=185, max=563, avg=339.72, stdev=41.70 00:23:16.411 clat percentiles (usec): 00:23:16.411 | 1.00th=[ 186], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:23:16.411 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:23:16.411 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 400], 00:23:16.411 | 99.00th=[ 445], 99.50th=[ 482], 99.90th=[ 502], 
99.95th=[ 537], 00:23:16.411 | 99.99th=[ 537] 00:23:16.411 bw ( KiB/s): min= 7232, max= 7232, per=23.57%, avg=7232.00, stdev= 0.00, samples=1 00:23:16.411 iops : min= 1808, max= 1808, avg=1808.00, stdev= 0.00, samples=1 00:23:16.411 lat (usec) : 100=0.04%, 250=1.63%, 500=96.67%, 750=1.67% 00:23:16.411 cpu : usr=1.10%, sys=4.40%, ctx=2711, majf=0, minf=11 00:23:16.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:16.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.411 issued rwts: total=1163,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:16.411 job2: (groupid=0, jobs=1): err= 0: pid=77628: Thu Apr 18 11:14:24 2024 00:23:16.412 read: IOPS=2105, BW=8420KiB/s (8622kB/s)(8420KiB/1000msec) 00:23:16.412 slat (nsec): min=12747, max=52950, avg=15969.25, stdev=4361.79 00:23:16.412 clat (usec): min=196, max=526, avg=219.92, stdev=14.44 00:23:16.412 lat (usec): min=211, max=542, avg=235.89, stdev=15.45 00:23:16.412 clat percentiles (usec): 00:23:16.412 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 210], 00:23:16.412 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 221], 00:23:16.412 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 235], 95.00th=[ 239], 00:23:16.412 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 383], 99.95th=[ 420], 00:23:16.412 | 99.99th=[ 529] 00:23:16.412 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec); 0 zone resets 00:23:16.412 slat (usec): min=18, max=123, avg=23.51, stdev= 7.53 00:23:16.412 clat (usec): min=149, max=456, avg=169.99, stdev=13.26 00:23:16.412 lat (usec): min=169, max=477, avg=193.50, stdev=16.70 00:23:16.412 clat percentiles (usec): 00:23:16.412 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 161], 00:23:16.412 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:23:16.412 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:23:16.412 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 334], 99.95th=[ 371], 00:23:16.412 | 99.99th=[ 457] 00:23:16.412 bw ( KiB/s): min=10416, max=10416, per=33.94%, avg=10416.00, stdev= 0.00, samples=1 00:23:16.412 iops : min= 2604, max= 2604, avg=2604.00, stdev= 0.00, samples=1 00:23:16.412 lat (usec) : 250=99.38%, 500=0.60%, 750=0.02% 00:23:16.412 cpu : usr=1.40%, sys=7.30%, ctx=4666, majf=0, minf=7 00:23:16.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:16.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.412 issued rwts: total=2105,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:16.412 job3: (groupid=0, jobs=1): err= 0: pid=77629: Thu Apr 18 11:14:24 2024 00:23:16.412 read: IOPS=1161, BW=4647KiB/s (4759kB/s)(4652KiB/1001msec) 00:23:16.412 slat (nsec): min=9343, max=46076, avg=15728.93, stdev=4492.32 00:23:16.412 clat (usec): min=239, max=686, avg=390.54, stdev=43.36 00:23:16.412 lat (usec): min=251, max=719, avg=406.26, stdev=44.94 00:23:16.412 clat percentiles (usec): 00:23:16.412 | 1.00th=[ 258], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 367], 00:23:16.412 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 388], 00:23:16.412 | 70.00th=[ 396], 80.00th=[ 404], 90.00th=[ 461], 95.00th=[ 486], 00:23:16.412 | 99.00th=[ 
510], 99.50th=[ 529], 99.90th=[ 594], 99.95th=[ 685], 00:23:16.412 | 99.99th=[ 685] 00:23:16.412 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:23:16.412 slat (nsec): min=11587, max=87122, avg=24495.10, stdev=6247.09 00:23:16.412 clat (usec): min=167, max=551, avg=315.55, stdev=43.67 00:23:16.412 lat (usec): min=191, max=577, avg=340.05, stdev=43.57 00:23:16.412 clat percentiles (usec): 00:23:16.412 | 1.00th=[ 194], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:23:16.412 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:23:16.412 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 375], 95.00th=[ 408], 00:23:16.412 | 99.00th=[ 461], 99.50th=[ 490], 99.90th=[ 537], 99.95th=[ 553], 00:23:16.412 | 99.99th=[ 553] 00:23:16.412 bw ( KiB/s): min= 7232, max= 7232, per=23.57%, avg=7232.00, stdev= 0.00, samples=1 00:23:16.412 iops : min= 1808, max= 1808, avg=1808.00, stdev= 0.00, samples=1 00:23:16.412 lat (usec) : 250=1.48%, 500=97.30%, 750=1.22% 00:23:16.412 cpu : usr=1.20%, sys=4.30%, ctx=2704, majf=0, minf=15 00:23:16.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:16.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.412 issued rwts: total=1163,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:16.412 00:23:16.412 Run status group 0 (all jobs): 00:23:16.412 READ: bw=24.9MiB/s (26.1MB/s), 4647KiB/s-8420KiB/s (4759kB/s-8622kB/s), io=24.9MiB (26.1MB), run=1000-1001msec 00:23:16.412 WRITE: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-10.0MiB/s (6285kB/s-10.5MB/s), io=30.0MiB (31.5MB), run=1000-1001msec 00:23:16.412 00:23:16.412 Disk stats (read/write): 00:23:16.412 nvme0n1: ios=1736/2048, merge=0/0, ticks=474/356, in_queue=830, util=87.95% 00:23:16.412 nvme0n2: ios=1044/1322, merge=0/0, ticks=422/417, in_queue=839, util=87.76% 00:23:16.412 nvme0n3: ios=1904/2048, merge=0/0, ticks=425/365, in_queue=790, util=89.04% 00:23:16.412 nvme0n4: ios=1024/1322, merge=0/0, ticks=387/420, in_queue=807, util=89.60% 00:23:16.412 11:14:24 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:23:16.412 [global] 00:23:16.412 thread=1 00:23:16.412 invalidate=1 00:23:16.412 rw=write 00:23:16.412 time_based=1 00:23:16.412 runtime=1 00:23:16.412 ioengine=libaio 00:23:16.412 direct=1 00:23:16.412 bs=4096 00:23:16.412 iodepth=128 00:23:16.412 norandommap=0 00:23:16.412 numjobs=1 00:23:16.412 00:23:16.412 verify_dump=1 00:23:16.412 verify_backlog=512 00:23:16.412 verify_state_save=0 00:23:16.412 do_verify=1 00:23:16.412 verify=crc32c-intel 00:23:16.412 [job0] 00:23:16.412 filename=/dev/nvme0n1 00:23:16.412 [job1] 00:23:16.412 filename=/dev/nvme0n2 00:23:16.412 [job2] 00:23:16.412 filename=/dev/nvme0n3 00:23:16.412 [job3] 00:23:16.412 filename=/dev/nvme0n4 00:23:16.412 Could not set queue depth (nvme0n1) 00:23:16.412 Could not set queue depth (nvme0n2) 00:23:16.412 Could not set queue depth (nvme0n3) 00:23:16.412 Could not set queue depth (nvme0n4) 00:23:16.412 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:16.412 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:16.412 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:23:16.412 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:16.412 fio-3.35 00:23:16.412 Starting 4 threads 00:23:17.788 00:23:17.788 job0: (groupid=0, jobs=1): err= 0: pid=77684: Thu Apr 18 11:14:25 2024 00:23:17.788 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:23:17.788 slat (usec): min=5, max=3593, avg=112.54, stdev=529.24 00:23:17.788 clat (usec): min=11283, max=17915, avg=14951.68, stdev=945.00 00:23:17.788 lat (usec): min=11723, max=20787, avg=15064.22, stdev=824.91 00:23:17.788 clat percentiles (usec): 00:23:17.788 | 1.00th=[11994], 5.00th=[12780], 10.00th=[13435], 20.00th=[14615], 00:23:17.788 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:23:17.788 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15926], 95.00th=[16057], 00:23:17.788 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:23:17.788 | 99.99th=[17957] 00:23:17.788 write: IOPS=4552, BW=17.8MiB/s (18.6MB/s)(17.9MiB/1004msec); 0 zone resets 00:23:17.788 slat (usec): min=10, max=3552, avg=110.01, stdev=499.23 00:23:17.788 clat (usec): min=2889, max=17595, avg=14316.98, stdev=1901.55 00:23:17.788 lat (usec): min=3169, max=17615, avg=14426.99, stdev=1901.49 00:23:17.788 clat percentiles (usec): 00:23:17.788 | 1.00th=[ 7177], 5.00th=[12125], 10.00th=[12518], 20.00th=[12911], 00:23:17.788 | 30.00th=[13173], 40.00th=[13435], 50.00th=[14484], 60.00th=[15401], 00:23:17.788 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16450], 95.00th=[16712], 00:23:17.788 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:23:17.788 | 99.99th=[17695] 00:23:17.788 bw ( KiB/s): min=17576, max=17976, per=26.80%, avg=17776.00, stdev=282.84, samples=2 00:23:17.788 iops : min= 4394, max= 4494, avg=4444.00, stdev=70.71, samples=2 00:23:17.788 lat (msec) : 4=0.38%, 10=0.43%, 20=99.19% 00:23:17.788 cpu : usr=3.99%, sys=12.46%, ctx=414, majf=0, minf=11 00:23:17.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:17.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:17.788 issued rwts: total=4096,4571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:17.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:17.788 job1: (groupid=0, jobs=1): err= 0: pid=77685: Thu Apr 18 11:14:25 2024 00:23:17.788 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:23:17.788 slat (usec): min=6, max=4503, avg=113.02, stdev=592.81 00:23:17.788 clat (usec): min=10905, max=19556, avg=14897.13, stdev=1170.71 00:23:17.788 lat (usec): min=10925, max=19589, avg=15010.15, stdev=1230.31 00:23:17.788 clat percentiles (usec): 00:23:17.788 | 1.00th=[11731], 5.00th=[12256], 10.00th=[13435], 20.00th=[14353], 00:23:17.788 | 30.00th=[14615], 40.00th=[14746], 50.00th=[15008], 60.00th=[15139], 00:23:17.788 | 70.00th=[15270], 80.00th=[15795], 90.00th=[16057], 95.00th=[16319], 00:23:17.788 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[19530], 00:23:17.788 | 99.99th=[19530] 00:23:17.788 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(17.9MiB/1005msec); 0 zone resets 00:23:17.788 slat (usec): min=11, max=4287, avg=108.71, stdev=518.84 00:23:17.788 clat (usec): min=3606, max=18734, avg=14329.20, stdev=1631.14 00:23:17.788 lat (usec): min=4318, max=18757, avg=14437.91, stdev=1610.25 00:23:17.788 clat percentiles (usec): 00:23:17.788 | 1.00th=[ 9110], 5.00th=[11207], 10.00th=[11600], 
20.00th=[13960], 00:23:17.788 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[15008], 00:23:17.788 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15664], 95.00th=[15795], 00:23:17.788 | 99.00th=[16188], 99.50th=[17957], 99.90th=[18744], 99.95th=[18744], 00:23:17.788 | 99.99th=[18744] 00:23:17.788 bw ( KiB/s): min=17768, max=17960, per=26.93%, avg=17864.00, stdev=135.76, samples=2 00:23:17.788 iops : min= 4442, max= 4490, avg=4466.00, stdev=33.94, samples=2 00:23:17.788 lat (msec) : 4=0.01%, 10=0.55%, 20=99.44% 00:23:17.788 cpu : usr=4.18%, sys=13.35%, ctx=350, majf=0, minf=11 00:23:17.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:17.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:17.788 issued rwts: total=4096,4594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:17.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:17.788 job2: (groupid=0, jobs=1): err= 0: pid=77686: Thu Apr 18 11:14:25 2024 00:23:17.788 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:23:17.788 slat (usec): min=6, max=5090, avg=132.11, stdev=647.59 00:23:17.788 clat (usec): min=13102, max=21129, avg=17584.28, stdev=1139.44 00:23:17.788 lat (usec): min=13741, max=24676, avg=17716.39, stdev=974.94 00:23:17.788 clat percentiles (usec): 00:23:17.788 | 1.00th=[13698], 5.00th=[15139], 10.00th=[16712], 20.00th=[17171], 00:23:17.788 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:23:17.788 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[19268], 00:23:17.788 | 99.00th=[20841], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:23:17.788 | 99.99th=[21103] 00:23:17.788 write: IOPS=3729, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1004msec); 0 zone resets 00:23:17.788 slat (usec): min=11, max=5460, avg=131.85, stdev=601.72 00:23:17.788 clat (usec): min=3669, max=22076, avg=16990.70, stdev=2193.66 00:23:17.788 lat (usec): min=4522, max=22099, avg=17122.55, stdev=2179.02 00:23:17.788 clat percentiles (usec): 00:23:17.788 | 1.00th=[ 9110], 5.00th=[14222], 10.00th=[14615], 20.00th=[15139], 00:23:17.788 | 30.00th=[15270], 40.00th=[16057], 50.00th=[17695], 60.00th=[18220], 00:23:17.788 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19268], 95.00th=[19530], 00:23:17.788 | 99.00th=[20317], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:23:17.788 | 99.99th=[22152] 00:23:17.788 bw ( KiB/s): min=12552, max=16384, per=21.81%, avg=14468.00, stdev=2709.63, samples=2 00:23:17.788 iops : min= 3138, max= 4096, avg=3617.00, stdev=677.41, samples=2 00:23:17.788 lat (msec) : 4=0.01%, 10=0.59%, 20=97.01%, 50=2.39% 00:23:17.788 cpu : usr=3.49%, sys=11.27%, ctx=336, majf=0, minf=15 00:23:17.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:17.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:17.788 issued rwts: total=3584,3744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:17.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:17.788 job3: (groupid=0, jobs=1): err= 0: pid=77687: Thu Apr 18 11:14:25 2024 00:23:17.788 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:23:17.788 slat (usec): min=8, max=5358, avg=130.76, stdev=634.66 00:23:17.788 clat (usec): min=13233, max=20484, avg=17504.98, stdev=1094.60 00:23:17.788 lat (usec): min=14108, max=20801, 
avg=17635.74, stdev=932.39 00:23:17.788 clat percentiles (usec): 00:23:17.788 | 1.00th=[13829], 5.00th=[15139], 10.00th=[16057], 20.00th=[17171], 00:23:17.788 | 30.00th=[17433], 40.00th=[17433], 50.00th=[17695], 60.00th=[17695], 00:23:17.788 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[19268], 00:23:17.788 | 99.00th=[19792], 99.50th=[19792], 99.90th=[20055], 99.95th=[20317], 00:23:17.788 | 99.99th=[20579] 00:23:17.788 write: IOPS=3743, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1004msec); 0 zone resets 00:23:17.788 slat (usec): min=12, max=4833, avg=133.16, stdev=609.57 00:23:17.788 clat (usec): min=417, max=21639, avg=16973.74, stdev=2288.72 00:23:17.788 lat (usec): min=5161, max=21680, avg=17106.91, stdev=2284.68 00:23:17.788 clat percentiles (usec): 00:23:17.788 | 1.00th=[ 6456], 5.00th=[14615], 10.00th=[15008], 20.00th=[15139], 00:23:17.788 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16319], 60.00th=[18220], 00:23:17.788 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[20055], 00:23:17.788 | 99.00th=[21103], 99.50th=[21103], 99.90th=[21627], 99.95th=[21627], 00:23:17.788 | 99.99th=[21627] 00:23:17.788 bw ( KiB/s): min=12656, max=16416, per=21.91%, avg=14536.00, stdev=2658.72, samples=2 00:23:17.788 iops : min= 3164, max= 4104, avg=3634.00, stdev=664.68, samples=2 00:23:17.788 lat (usec) : 500=0.01% 00:23:17.788 lat (msec) : 10=0.52%, 20=96.73%, 50=2.74% 00:23:17.788 cpu : usr=4.09%, sys=11.27%, ctx=344, majf=0, minf=10 00:23:17.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:17.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:17.788 issued rwts: total=3584,3758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:17.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:17.788 00:23:17.788 Run status group 0 (all jobs): 00:23:17.788 READ: bw=59.7MiB/s (62.6MB/s), 13.9MiB/s-15.9MiB/s (14.6MB/s-16.7MB/s), io=60.0MiB (62.9MB), run=1004-1005msec 00:23:17.788 WRITE: bw=64.8MiB/s (67.9MB/s), 14.6MiB/s-17.9MiB/s (15.3MB/s-18.7MB/s), io=65.1MiB (68.3MB), run=1004-1005msec 00:23:17.788 00:23:17.788 Disk stats (read/write): 00:23:17.788 nvme0n1: ios=3634/3880, merge=0/0, ticks=12346/12307, in_queue=24653, util=89.37% 00:23:17.788 nvme0n2: ios=3633/3930, merge=0/0, ticks=16144/15987, in_queue=32131, util=89.57% 00:23:17.788 nvme0n3: ios=3089/3273, merge=0/0, ticks=12504/12579, in_queue=25083, util=89.60% 00:23:17.788 nvme0n4: ios=3072/3288, merge=0/0, ticks=12495/12283, in_queue=24778, util=89.74% 00:23:17.788 11:14:25 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:23:17.788 [global] 00:23:17.788 thread=1 00:23:17.788 invalidate=1 00:23:17.788 rw=randwrite 00:23:17.788 time_based=1 00:23:17.788 runtime=1 00:23:17.788 ioengine=libaio 00:23:17.788 direct=1 00:23:17.788 bs=4096 00:23:17.788 iodepth=128 00:23:17.788 norandommap=0 00:23:17.788 numjobs=1 00:23:17.788 00:23:17.788 verify_dump=1 00:23:17.788 verify_backlog=512 00:23:17.788 verify_state_save=0 00:23:17.788 do_verify=1 00:23:17.788 verify=crc32c-intel 00:23:17.788 [job0] 00:23:17.788 filename=/dev/nvme0n1 00:23:17.788 [job1] 00:23:17.788 filename=/dev/nvme0n2 00:23:17.788 [job2] 00:23:17.788 filename=/dev/nvme0n3 00:23:17.788 [job3] 00:23:17.788 filename=/dev/nvme0n4 00:23:17.788 Could not set queue depth (nvme0n1) 00:23:17.788 Could not set queue depth (nvme0n2) 00:23:17.788 Could not set 
queue depth (nvme0n3) 00:23:17.788 Could not set queue depth (nvme0n4) 00:23:17.788 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:17.788 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:17.788 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:17.788 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:17.788 fio-3.35 00:23:17.788 Starting 4 threads 00:23:19.163 00:23:19.163 job0: (groupid=0, jobs=1): err= 0: pid=77746: Thu Apr 18 11:14:27 2024 00:23:19.163 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:23:19.163 slat (usec): min=5, max=11853, avg=107.39, stdev=662.27 00:23:19.163 clat (usec): min=5360, max=26308, avg=14069.20, stdev=3305.17 00:23:19.163 lat (usec): min=5381, max=26327, avg=14176.59, stdev=3342.86 00:23:19.163 clat percentiles (usec): 00:23:19.163 | 1.00th=[ 9896], 5.00th=[10421], 10.00th=[10814], 20.00th=[11469], 00:23:19.163 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13435], 60.00th=[13829], 00:23:19.163 | 70.00th=[14615], 80.00th=[16188], 90.00th=[18482], 95.00th=[21365], 00:23:19.163 | 99.00th=[24773], 99.50th=[25297], 99.90th=[26346], 99.95th=[26346], 00:23:19.163 | 99.99th=[26346] 00:23:19.163 write: IOPS=4924, BW=19.2MiB/s (20.2MB/s)(19.4MiB/1008msec); 0 zone resets 00:23:19.163 slat (usec): min=4, max=11474, avg=94.26, stdev=587.21 00:23:19.163 clat (usec): min=1975, max=26206, avg=12660.15, stdev=2376.47 00:23:19.163 lat (usec): min=4667, max=26216, avg=12754.41, stdev=2438.84 00:23:19.163 clat percentiles (usec): 00:23:19.163 | 1.00th=[ 5276], 5.00th=[ 7242], 10.00th=[ 9503], 20.00th=[11338], 00:23:19.163 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13566], 60.00th=[13960], 00:23:19.163 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14615], 95.00th=[14877], 00:23:19.163 | 99.00th=[15139], 99.50th=[19006], 99.90th=[25560], 99.95th=[26084], 00:23:19.163 | 99.99th=[26084] 00:23:19.163 bw ( KiB/s): min=18224, max=20464, per=35.70%, avg=19344.00, stdev=1583.92, samples=2 00:23:19.163 iops : min= 4556, max= 5116, avg=4836.00, stdev=395.98, samples=2 00:23:19.163 lat (msec) : 2=0.01%, 10=6.68%, 20=89.76%, 50=3.55% 00:23:19.163 cpu : usr=5.16%, sys=13.70%, ctx=548, majf=0, minf=12 00:23:19.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:19.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:19.163 issued rwts: total=4608,4964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:19.163 job1: (groupid=0, jobs=1): err= 0: pid=77747: Thu Apr 18 11:14:27 2024 00:23:19.163 read: IOPS=4500, BW=17.6MiB/s (18.4MB/s)(17.7MiB/1006msec) 00:23:19.163 slat (usec): min=3, max=12335, avg=117.82, stdev=751.50 00:23:19.163 clat (usec): min=2820, max=26571, avg=14662.49, stdev=3760.94 00:23:19.163 lat (usec): min=5013, max=26590, avg=14780.31, stdev=3792.33 00:23:19.163 clat percentiles (usec): 00:23:19.163 | 1.00th=[ 5932], 5.00th=[10421], 10.00th=[10945], 20.00th=[11600], 00:23:19.163 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13698], 60.00th=[14484], 00:23:19.163 | 70.00th=[15926], 80.00th=[16909], 90.00th=[20317], 95.00th=[22676], 00:23:19.163 | 99.00th=[25035], 99.50th=[25560], 99.90th=[26608], 
99.95th=[26608], 00:23:19.163 | 99.99th=[26608] 00:23:19.163 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:23:19.163 slat (usec): min=4, max=11365, avg=93.77, stdev=402.53 00:23:19.163 clat (usec): min=4308, max=26490, avg=13256.63, stdev=2882.69 00:23:19.163 lat (usec): min=4328, max=26498, avg=13350.40, stdev=2913.35 00:23:19.163 clat percentiles (usec): 00:23:19.163 | 1.00th=[ 5407], 5.00th=[ 6587], 10.00th=[ 7898], 20.00th=[11731], 00:23:19.163 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:23:19.163 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15270], 95.00th=[15533], 00:23:19.163 | 99.00th=[16581], 99.50th=[17695], 99.90th=[26084], 99.95th=[26346], 00:23:19.163 | 99.99th=[26608] 00:23:19.163 bw ( KiB/s): min=17552, max=19312, per=34.02%, avg=18432.00, stdev=1244.51, samples=2 00:23:19.163 iops : min= 4388, max= 4828, avg=4608.00, stdev=311.13, samples=2 00:23:19.163 lat (msec) : 4=0.01%, 10=9.14%, 20=85.30%, 50=5.55% 00:23:19.163 cpu : usr=4.68%, sys=10.75%, ctx=694, majf=0, minf=11 00:23:19.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:19.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:19.163 issued rwts: total=4527,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:19.163 job2: (groupid=0, jobs=1): err= 0: pid=77748: Thu Apr 18 11:14:27 2024 00:23:19.163 read: IOPS=1766, BW=7064KiB/s (7234kB/s)(7128KiB/1009msec) 00:23:19.163 slat (usec): min=4, max=24296, avg=271.35, stdev=1502.83 00:23:19.164 clat (usec): min=2961, max=54879, avg=32434.84, stdev=6913.84 00:23:19.164 lat (usec): min=8291, max=54922, avg=32706.19, stdev=6979.55 00:23:19.164 clat percentiles (usec): 00:23:19.164 | 1.00th=[ 8717], 5.00th=[21365], 10.00th=[27657], 20.00th=[30016], 00:23:19.164 | 30.00th=[30540], 40.00th=[30802], 50.00th=[31065], 60.00th=[31851], 00:23:19.164 | 70.00th=[34341], 80.00th=[38011], 90.00th=[41681], 95.00th=[44827], 00:23:19.164 | 99.00th=[45351], 99.50th=[45876], 99.90th=[49021], 99.95th=[54789], 00:23:19.164 | 99.99th=[54789] 00:23:19.164 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:23:19.164 slat (usec): min=4, max=27331, avg=246.31, stdev=1301.58 00:23:19.164 clat (usec): min=5374, max=57377, avg=34158.00, stdev=9131.17 00:23:19.164 lat (usec): min=5403, max=57410, avg=34404.31, stdev=9269.93 00:23:19.164 clat percentiles (usec): 00:23:19.164 | 1.00th=[10945], 5.00th=[13304], 10.00th=[22414], 20.00th=[29754], 00:23:19.164 | 30.00th=[30016], 40.00th=[32637], 50.00th=[33424], 60.00th=[35390], 00:23:19.164 | 70.00th=[38536], 80.00th=[42206], 90.00th=[45876], 95.00th=[47973], 00:23:19.164 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54264], 99.95th=[56886], 00:23:19.164 | 99.99th=[57410] 00:23:19.164 bw ( KiB/s): min= 8192, max= 8208, per=15.13%, avg=8200.00, stdev=11.31, samples=2 00:23:19.164 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:23:19.164 lat (msec) : 4=0.03%, 10=1.51%, 20=4.83%, 50=92.11%, 100=1.51% 00:23:19.164 cpu : usr=2.48%, sys=5.26%, ctx=535, majf=0, minf=7 00:23:19.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:23:19.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:19.164 issued rwts: 
total=1782,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:19.164 job3: (groupid=0, jobs=1): err= 0: pid=77749: Thu Apr 18 11:14:27 2024 00:23:19.164 read: IOPS=1772, BW=7091KiB/s (7261kB/s)(7148KiB/1008msec) 00:23:19.164 slat (usec): min=4, max=17481, avg=261.26, stdev=1362.82 00:23:19.164 clat (usec): min=7053, max=48703, avg=31099.43, stdev=6359.75 00:23:19.164 lat (usec): min=7062, max=50386, avg=31360.68, stdev=6467.61 00:23:19.164 clat percentiles (usec): 00:23:19.164 | 1.00th=[ 7635], 5.00th=[21627], 10.00th=[23462], 20.00th=[27395], 00:23:19.164 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30540], 60.00th=[31327], 00:23:19.164 | 70.00th=[32637], 80.00th=[35914], 90.00th=[39584], 95.00th=[42206], 00:23:19.164 | 99.00th=[45351], 99.50th=[46400], 99.90th=[47973], 99.95th=[48497], 00:23:19.164 | 99.99th=[48497] 00:23:19.164 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:23:19.164 slat (usec): min=5, max=37045, avg=252.15, stdev=1391.90 00:23:19.164 clat (usec): min=5314, max=67361, avg=35208.53, stdev=7504.26 00:23:19.164 lat (usec): min=5330, max=67436, avg=35460.68, stdev=7622.92 00:23:19.164 clat percentiles (usec): 00:23:19.164 | 1.00th=[ 8586], 5.00th=[23725], 10.00th=[27919], 20.00th=[29492], 00:23:19.164 | 30.00th=[31851], 40.00th=[32900], 50.00th=[34866], 60.00th=[36963], 00:23:19.164 | 70.00th=[39584], 80.00th=[42206], 90.00th=[44827], 95.00th=[45876], 00:23:19.164 | 99.00th=[47973], 99.50th=[50070], 99.90th=[53740], 99.95th=[54264], 00:23:19.164 | 99.99th=[67634] 00:23:19.164 bw ( KiB/s): min= 8192, max= 8192, per=15.12%, avg=8192.00, stdev= 0.00, samples=2 00:23:19.164 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:23:19.164 lat (msec) : 10=1.33%, 20=2.27%, 50=96.17%, 100=0.23% 00:23:19.164 cpu : usr=2.48%, sys=4.87%, ctx=605, majf=0, minf=13 00:23:19.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:23:19.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:19.164 issued rwts: total=1787,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:19.164 00:23:19.164 Run status group 0 (all jobs): 00:23:19.164 READ: bw=49.2MiB/s (51.6MB/s), 7064KiB/s-17.9MiB/s (7234kB/s-18.7MB/s), io=49.6MiB (52.0MB), run=1006-1009msec 00:23:19.164 WRITE: bw=52.9MiB/s (55.5MB/s), 8119KiB/s-19.2MiB/s (8314kB/s-20.2MB/s), io=53.4MiB (56.0MB), run=1006-1009msec 00:23:19.164 00:23:19.164 Disk stats (read/write): 00:23:19.164 nvme0n1: ios=4071/4096, merge=0/0, ticks=52825/49398, in_queue=102223, util=89.68% 00:23:19.164 nvme0n2: ios=3692/4096, merge=0/0, ticks=51272/52972, in_queue=104244, util=89.68% 00:23:19.164 nvme0n3: ios=1567/1699, merge=0/0, ticks=33581/37992, in_queue=71573, util=90.61% 00:23:19.164 nvme0n4: ios=1553/1740, merge=0/0, ticks=30743/41457, in_queue=72200, util=89.92% 00:23:19.164 11:14:27 -- target/fio.sh@55 -- # sync 00:23:19.164 11:14:27 -- target/fio.sh@59 -- # fio_pid=77762 00:23:19.164 11:14:27 -- target/fio.sh@61 -- # sleep 3 00:23:19.164 11:14:27 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:23:19.164 [global] 00:23:19.164 thread=1 00:23:19.164 invalidate=1 00:23:19.164 rw=read 00:23:19.164 time_based=1 00:23:19.164 runtime=10 00:23:19.164 ioengine=libaio 00:23:19.164 direct=1 
00:23:19.164 bs=4096 00:23:19.164 iodepth=1 00:23:19.164 norandommap=1 00:23:19.164 numjobs=1 00:23:19.164 00:23:19.164 [job0] 00:23:19.164 filename=/dev/nvme0n1 00:23:19.164 [job1] 00:23:19.164 filename=/dev/nvme0n2 00:23:19.164 [job2] 00:23:19.164 filename=/dev/nvme0n3 00:23:19.164 [job3] 00:23:19.164 filename=/dev/nvme0n4 00:23:19.164 Could not set queue depth (nvme0n1) 00:23:19.164 Could not set queue depth (nvme0n2) 00:23:19.164 Could not set queue depth (nvme0n3) 00:23:19.164 Could not set queue depth (nvme0n4) 00:23:19.164 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:19.164 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:19.164 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:19.164 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:19.164 fio-3.35 00:23:19.164 Starting 4 threads 00:23:22.450 11:14:30 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:23:22.450 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=30842880, buflen=4096 00:23:22.450 fio: pid=77805, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:22.450 11:14:30 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:23:22.450 fio: pid=77804, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:22.450 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=52080640, buflen=4096 00:23:22.450 11:14:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:22.450 11:14:30 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:23:22.707 fio: pid=77802, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:22.707 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=58322944, buflen=4096 00:23:22.965 11:14:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:22.965 11:14:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:23:23.224 fio: pid=77803, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:23.224 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=40443904, buflen=4096 00:23:23.224 00:23:23.224 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77802: Thu Apr 18 11:14:31 2024 00:23:23.224 read: IOPS=4175, BW=16.3MiB/s (17.1MB/s)(55.6MiB/3410msec) 00:23:23.224 slat (usec): min=10, max=12524, avg=17.37, stdev=165.39 00:23:23.224 clat (usec): min=184, max=1978, avg=220.78, stdev=44.29 00:23:23.224 lat (usec): min=199, max=12804, avg=238.15, stdev=172.36 00:23:23.224 clat percentiles (usec): 00:23:23.224 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:23:23.224 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 217], 00:23:23.224 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 269], 00:23:23.224 | 99.00th=[ 367], 99.50th=[ 437], 99.90th=[ 578], 99.95th=[ 775], 00:23:23.224 | 99.99th=[ 1795] 00:23:23.224 bw ( KiB/s): min=16584, max=17544, per=37.13%, avg=17221.33, stdev=382.49, samples=6 00:23:23.224 iops : min= 4146, max= 4386, avg=4305.33, stdev=95.62, samples=6 00:23:23.224 lat 
(usec) : 250=93.95%, 500=5.88%, 750=0.09%, 1000=0.03% 00:23:23.224 lat (msec) : 2=0.04% 00:23:23.224 cpu : usr=0.97%, sys=5.10%, ctx=14250, majf=0, minf=1 00:23:23.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:23.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.224 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.224 issued rwts: total=14240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:23.224 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77803: Thu Apr 18 11:14:31 2024 00:23:23.224 read: IOPS=2581, BW=10.1MiB/s (10.6MB/s)(38.6MiB/3826msec) 00:23:23.224 slat (usec): min=8, max=15933, avg=22.94, stdev=222.52 00:23:23.224 clat (usec): min=121, max=168035, avg=362.93, stdev=1698.45 00:23:23.224 lat (usec): min=192, max=168064, avg=385.86, stdev=1713.00 00:23:23.224 clat percentiles (usec): 00:23:23.224 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 215], 20.00th=[ 334], 00:23:23.224 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 363], 00:23:23.224 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 388], 95.00th=[ 400], 00:23:23.224 | 99.00th=[ 449], 99.50th=[ 506], 99.90th=[ 1074], 99.95th=[ 3130], 00:23:23.224 | 99.99th=[168821] 00:23:23.224 bw ( KiB/s): min= 8058, max=10712, per=21.57%, avg=10001.43, stdev=926.05, samples=7 00:23:23.224 iops : min= 2014, max= 2678, avg=2500.29, stdev=231.69, samples=7 00:23:23.224 lat (usec) : 250=13.38%, 500=86.08%, 750=0.43%, 1000=0.01% 00:23:23.224 lat (msec) : 2=0.04%, 4=0.04%, 20=0.01%, 250=0.01% 00:23:23.224 cpu : usr=0.81%, sys=4.00%, ctx=9898, majf=0, minf=1 00:23:23.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:23.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.224 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.224 issued rwts: total=9875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:23.224 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77804: Thu Apr 18 11:14:31 2024 00:23:23.224 read: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(49.7MiB/3151msec) 00:23:23.224 slat (usec): min=12, max=11687, avg=18.15, stdev=135.36 00:23:23.224 clat (usec): min=195, max=1974, avg=228.11, stdev=39.34 00:23:23.224 lat (usec): min=207, max=11966, avg=246.27, stdev=141.73 00:23:23.224 clat percentiles (usec): 00:23:23.224 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:23:23.224 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:23:23.224 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 258], 00:23:23.224 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 441], 99.95th=[ 594], 00:23:23.224 | 99.99th=[ 1795] 00:23:23.224 bw ( KiB/s): min=15248, max=16920, per=35.42%, avg=16428.00, stdev=635.52, samples=6 00:23:23.224 iops : min= 3812, max= 4230, avg=4107.00, stdev=158.88, samples=6 00:23:23.224 lat (usec) : 250=93.62%, 500=6.30%, 750=0.02%, 1000=0.02% 00:23:23.224 lat (msec) : 2=0.03% 00:23:23.224 cpu : usr=1.21%, sys=5.40%, ctx=12722, majf=0, minf=1 00:23:23.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:23.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.224 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:23:23.224 issued rwts: total=12716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:23.224 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77805: Thu Apr 18 11:14:31 2024 00:23:23.224 read: IOPS=2589, BW=10.1MiB/s (10.6MB/s)(29.4MiB/2908msec) 00:23:23.224 slat (usec): min=9, max=333, avg=22.35, stdev=11.99 00:23:23.224 clat (usec): min=204, max=4004, avg=361.47, stdev=58.67 00:23:23.224 lat (usec): min=221, max=4036, avg=383.83, stdev=60.55 00:23:23.224 clat percentiles (usec): 00:23:23.224 | 1.00th=[ 237], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 347], 00:23:23.224 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 363], 00:23:23.224 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 383], 95.00th=[ 396], 00:23:23.224 | 99.00th=[ 420], 99.50th=[ 486], 99.90th=[ 848], 99.95th=[ 1139], 00:23:23.224 | 99.99th=[ 4015] 00:23:23.224 bw ( KiB/s): min=10008, max=10680, per=22.26%, avg=10323.20, stdev=320.28, samples=5 00:23:23.224 iops : min= 2502, max= 2670, avg=2580.80, stdev=80.07, samples=5 00:23:23.224 lat (usec) : 250=1.25%, 500=98.33%, 750=0.27%, 1000=0.05% 00:23:23.224 lat (msec) : 2=0.07%, 4=0.01%, 10=0.01% 00:23:23.224 cpu : usr=1.24%, sys=4.92%, ctx=7537, majf=0, minf=1 00:23:23.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:23.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.224 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.224 issued rwts: total=7531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:23.224 00:23:23.224 Run status group 0 (all jobs): 00:23:23.224 READ: bw=45.3MiB/s (47.5MB/s), 10.1MiB/s-16.3MiB/s (10.6MB/s-17.1MB/s), io=173MiB (182MB), run=2908-3826msec 00:23:23.224 00:23:23.224 Disk stats (read/write): 00:23:23.224 nvme0n1: ios=14124/0, merge=0/0, ticks=3166/0, in_queue=3166, util=95.31% 00:23:23.224 nvme0n2: ios=8975/0, merge=0/0, ticks=3413/0, in_queue=3413, util=95.56% 00:23:23.224 nvme0n3: ios=12628/0, merge=0/0, ticks=2911/0, in_queue=2911, util=96.21% 00:23:23.224 nvme0n4: ios=7427/0, merge=0/0, ticks=2688/0, in_queue=2688, util=96.76% 00:23:23.224 11:14:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:23.224 11:14:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:23:23.791 11:14:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:23.791 11:14:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:23:24.405 11:14:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:24.405 11:14:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:23:24.663 11:14:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:24.663 11:14:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:23:25.229 11:14:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:25.229 11:14:33 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:23:25.487 11:14:33 -- target/fio.sh@69 -- # 
fio_status=0 00:23:25.487 11:14:33 -- target/fio.sh@70 -- # wait 77762 00:23:25.487 11:14:33 -- target/fio.sh@70 -- # fio_status=4 00:23:25.487 11:14:33 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:25.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:25.487 11:14:33 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:25.487 11:14:33 -- common/autotest_common.sh@1205 -- # local i=0 00:23:25.487 11:14:33 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:25.487 11:14:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:25.487 11:14:33 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:25.487 11:14:33 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:25.487 nvmf hotplug test: fio failed as expected 00:23:25.487 11:14:33 -- common/autotest_common.sh@1217 -- # return 0 00:23:25.487 11:14:33 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:23:25.487 11:14:33 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:23:25.487 11:14:33 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:25.745 11:14:33 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:23:25.745 11:14:33 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:23:25.745 11:14:33 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:23:25.745 11:14:33 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:23:25.745 11:14:33 -- target/fio.sh@91 -- # nvmftestfini 00:23:25.745 11:14:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:25.745 11:14:33 -- nvmf/common.sh@117 -- # sync 00:23:25.745 11:14:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:25.745 11:14:33 -- nvmf/common.sh@120 -- # set +e 00:23:25.745 11:14:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:25.745 11:14:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:25.745 rmmod nvme_tcp 00:23:26.003 rmmod nvme_fabrics 00:23:26.003 rmmod nvme_keyring 00:23:26.003 11:14:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.003 11:14:34 -- nvmf/common.sh@124 -- # set -e 00:23:26.003 11:14:34 -- nvmf/common.sh@125 -- # return 0 00:23:26.003 11:14:34 -- nvmf/common.sh@478 -- # '[' -n 77266 ']' 00:23:26.003 11:14:34 -- nvmf/common.sh@479 -- # killprocess 77266 00:23:26.003 11:14:34 -- common/autotest_common.sh@936 -- # '[' -z 77266 ']' 00:23:26.003 11:14:34 -- common/autotest_common.sh@940 -- # kill -0 77266 00:23:26.003 11:14:34 -- common/autotest_common.sh@941 -- # uname 00:23:26.003 11:14:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:26.003 11:14:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77266 00:23:26.003 killing process with pid 77266 00:23:26.003 11:14:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:26.003 11:14:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:26.003 11:14:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77266' 00:23:26.003 11:14:34 -- common/autotest_common.sh@955 -- # kill 77266 00:23:26.003 11:14:34 -- common/autotest_common.sh@960 -- # wait 77266 00:23:27.385 11:14:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:27.385 11:14:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:27.385 11:14:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:27.385 11:14:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
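For orientation, the hotplug check traced above reduces to: detach the initiator from the subsystem, then poll lsblk until no block device reports the subsystem serial any more. A minimal sketch of that sequence, using the NQN and serial that appear in this log (the retry bound and sleep interval are assumptions; the real loop is waitforserial_disconnect in common/autotest_common.sh):

# sketch only -- mirrors the disconnect wait traced above
nqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME
nvme disconnect -n "$nqn"
for i in $(seq 1 15); do                      # retry bound is an assumption
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || break
    sleep 1                                   # interval is an assumption
done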
00:23:27.385 11:14:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.385 11:14:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.385 11:14:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.385 11:14:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.385 11:14:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:27.385 00:23:27.385 real 0m22.395s 00:23:27.385 user 1m24.772s 00:23:27.385 sys 0m8.688s 00:23:27.385 11:14:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:27.385 11:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:27.385 ************************************ 00:23:27.385 END TEST nvmf_fio_target 00:23:27.385 ************************************ 00:23:27.385 11:14:35 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:27.385 11:14:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:27.385 11:14:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:27.385 11:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:27.385 ************************************ 00:23:27.385 START TEST nvmf_bdevio 00:23:27.385 ************************************ 00:23:27.385 11:14:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:27.385 * Looking for test storage... 00:23:27.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:27.385 11:14:35 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:27.385 11:14:35 -- nvmf/common.sh@7 -- # uname -s 00:23:27.385 11:14:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.385 11:14:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.385 11:14:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.385 11:14:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.385 11:14:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.385 11:14:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.385 11:14:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.385 11:14:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.386 11:14:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.386 11:14:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.386 11:14:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:23:27.386 11:14:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:23:27.386 11:14:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.386 11:14:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.386 11:14:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:27.386 11:14:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.386 11:14:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:27.386 11:14:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.386 11:14:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.386 11:14:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.386 11:14:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.386 11:14:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.386 11:14:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.386 11:14:35 -- paths/export.sh@5 -- # export PATH 00:23:27.386 11:14:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.386 11:14:35 -- nvmf/common.sh@47 -- # : 0 00:23:27.386 11:14:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.386 11:14:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.386 11:14:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.386 11:14:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.386 11:14:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.386 11:14:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.386 11:14:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.386 11:14:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.386 11:14:35 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:27.386 11:14:35 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:27.386 11:14:35 -- target/bdevio.sh@14 -- # nvmftestinit 00:23:27.386 11:14:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:27.386 11:14:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.386 11:14:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:27.386 11:14:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:27.386 11:14:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:27.386 11:14:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:27.386 11:14:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.386 11:14:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.386 11:14:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:27.386 11:14:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:27.386 11:14:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:27.386 11:14:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:27.386 11:14:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:27.386 11:14:35 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:27.386 11:14:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.386 11:14:35 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.386 11:14:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:27.386 11:14:35 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:27.386 11:14:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:27.386 11:14:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:27.386 11:14:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:27.386 11:14:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.386 11:14:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:27.386 11:14:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:27.386 11:14:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:27.386 11:14:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:27.386 11:14:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:27.386 11:14:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:27.386 Cannot find device "nvmf_tgt_br" 00:23:27.386 11:14:35 -- nvmf/common.sh@155 -- # true 00:23:27.386 11:14:35 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:27.386 Cannot find device "nvmf_tgt_br2" 00:23:27.386 11:14:35 -- nvmf/common.sh@156 -- # true 00:23:27.386 11:14:35 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:27.386 11:14:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:27.386 Cannot find device "nvmf_tgt_br" 00:23:27.386 11:14:35 -- nvmf/common.sh@158 -- # true 00:23:27.386 11:14:35 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:27.386 Cannot find device "nvmf_tgt_br2" 00:23:27.386 11:14:35 -- nvmf/common.sh@159 -- # true 00:23:27.386 11:14:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:27.658 11:14:35 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:27.658 11:14:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:27.658 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:27.658 11:14:35 -- nvmf/common.sh@162 -- # true 00:23:27.658 11:14:35 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:27.658 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:27.658 11:14:35 -- nvmf/common.sh@163 -- # true 00:23:27.658 11:14:35 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:27.658 11:14:35 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:27.658 11:14:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:27.658 11:14:35 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:27.658 
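For reference, the veth/namespace topology that nvmf_veth_init builds here condenses to the steps below (all interface names and addresses are taken verbatim from this trace; the addressing, bridge and iptables commands appear just below in the log):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# plus 'ip link set ... up' on each device, as traced below

The initiator side (10.0.0.1) and the target side (10.0.0.2 and 10.0.0.3, inside nvmf_tgt_ns_spdk) end up bridged on nvmf_br, with TCP port 4420 opened toward the target listener.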
11:14:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:27.658 11:14:35 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:27.658 11:14:35 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:27.658 11:14:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:27.658 11:14:35 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:27.658 11:14:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:27.658 11:14:35 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:27.658 11:14:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:27.658 11:14:35 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:27.658 11:14:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:27.658 11:14:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:27.658 11:14:35 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:27.658 11:14:35 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:27.658 11:14:35 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:27.658 11:14:35 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:27.658 11:14:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:27.658 11:14:35 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:27.658 11:14:35 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:27.658 11:14:35 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:27.658 11:14:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:27.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:23:27.658 00:23:27.658 --- 10.0.0.2 ping statistics --- 00:23:27.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.658 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:27.658 11:14:35 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:27.658 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:27.658 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:23:27.658 00:23:27.658 --- 10.0.0.3 ping statistics --- 00:23:27.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.658 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:27.658 11:14:35 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:27.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:23:27.658 00:23:27.658 --- 10.0.0.1 ping statistics --- 00:23:27.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.658 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:27.658 11:14:35 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.658 11:14:35 -- nvmf/common.sh@422 -- # return 0 00:23:27.658 11:14:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:27.658 11:14:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.658 11:14:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:27.658 11:14:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:27.658 11:14:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.658 11:14:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:27.658 11:14:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:27.658 11:14:35 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:27.658 11:14:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:27.658 11:14:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:27.658 11:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:27.658 11:14:35 -- nvmf/common.sh@470 -- # nvmfpid=78165 00:23:27.658 11:14:35 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:23:27.658 11:14:35 -- nvmf/common.sh@471 -- # waitforlisten 78165 00:23:27.658 11:14:35 -- common/autotest_common.sh@817 -- # '[' -z 78165 ']' 00:23:27.658 11:14:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.658 11:14:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:27.658 11:14:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.658 11:14:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:27.658 11:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:27.918 [2024-04-18 11:14:35.945681] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:27.918 [2024-04-18 11:14:35.946132] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.918 [2024-04-18 11:14:36.128445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.483 [2024-04-18 11:14:36.415462] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.483 [2024-04-18 11:14:36.415764] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.483 [2024-04-18 11:14:36.415800] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.483 [2024-04-18 11:14:36.415816] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.483 [2024-04-18 11:14:36.415832] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
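The nvmf_veth_init steps traced above build a small virtual topology before the target starts: a dedicated network namespace for the target, veth pairs for the initiator and two target interfaces, addresses in 10.0.0.0/24, and a bridge joining the host-side peers. Condensed into plain commands, using only the interface names and addresses shown in the trace (the earlier "Cannot find device" messages are just the cleanup pass running against a topology that does not exist yet):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2; ping -c 1 10.0.0.3; ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the host ends of each pair enslaved to nvmf_br, the initiator at 10.0.0.1 can reach both target addresses inside the namespace, and the iptables rule admits NVMe/TCP traffic on port 4420, which the ping statistics above confirm.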
00:23:28.483 [2024-04-18 11:14:36.416083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:28.483 [2024-04-18 11:14:36.416204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:28.483 [2024-04-18 11:14:36.416330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:28.483 [2024-04-18 11:14:36.416345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.741 11:14:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:28.741 11:14:36 -- common/autotest_common.sh@850 -- # return 0 00:23:28.741 11:14:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:28.741 11:14:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:28.741 11:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:28.741 11:14:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.741 11:14:36 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.741 11:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.741 11:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:28.741 [2024-04-18 11:14:36.913942] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.741 11:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.741 11:14:36 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:28.741 11:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.741 11:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:29.000 Malloc0 00:23:29.000 11:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.000 11:14:37 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:29.000 11:14:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.000 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:29.000 11:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.000 11:14:37 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:29.000 11:14:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.000 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:29.000 11:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.000 11:14:37 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.000 11:14:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.000 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:29.000 [2024-04-18 11:14:37.057668] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.000 11:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.000 11:14:37 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:23:29.000 11:14:37 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:29.000 11:14:37 -- nvmf/common.sh@521 -- # config=() 00:23:29.000 11:14:37 -- nvmf/common.sh@521 -- # local subsystem config 00:23:29.000 11:14:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:29.000 11:14:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:29.000 { 00:23:29.000 "params": { 00:23:29.000 "name": "Nvme$subsystem", 00:23:29.000 "trtype": "$TEST_TRANSPORT", 00:23:29.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.000 "adrfam": "ipv4", 00:23:29.000 "trsvcid": "$NVMF_PORT", 00:23:29.000 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.000 "hdgst": ${hdgst:-false}, 00:23:29.000 "ddgst": ${ddgst:-false} 00:23:29.000 }, 00:23:29.000 "method": "bdev_nvme_attach_controller" 00:23:29.000 } 00:23:29.000 EOF 00:23:29.000 )") 00:23:29.000 11:14:37 -- nvmf/common.sh@543 -- # cat 00:23:29.000 11:14:37 -- nvmf/common.sh@545 -- # jq . 00:23:29.000 11:14:37 -- nvmf/common.sh@546 -- # IFS=, 00:23:29.000 11:14:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:29.000 "params": { 00:23:29.000 "name": "Nvme1", 00:23:29.000 "trtype": "tcp", 00:23:29.000 "traddr": "10.0.0.2", 00:23:29.000 "adrfam": "ipv4", 00:23:29.000 "trsvcid": "4420", 00:23:29.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.000 "hdgst": false, 00:23:29.000 "ddgst": false 00:23:29.000 }, 00:23:29.000 "method": "bdev_nvme_attach_controller" 00:23:29.000 }' 00:23:29.000 [2024-04-18 11:14:37.174256] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:29.000 [2024-04-18 11:14:37.174427] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78219 ] 00:23:29.259 [2024-04-18 11:14:37.352733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:29.517 [2024-04-18 11:14:37.632761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.517 [2024-04-18 11:14:37.632835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.517 [2024-04-18 11:14:37.632841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.084 I/O targets: 00:23:30.084 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:30.084 00:23:30.084 00:23:30.084 CUnit - A unit testing framework for C - Version 2.1-3 00:23:30.084 http://cunit.sourceforge.net/ 00:23:30.084 00:23:30.084 00:23:30.084 Suite: bdevio tests on: Nvme1n1 00:23:30.084 Test: blockdev write read block ...passed 00:23:30.084 Test: blockdev write zeroes read block ...passed 00:23:30.084 Test: blockdev write zeroes read no split ...passed 00:23:30.084 Test: blockdev write zeroes read split ...passed 00:23:30.084 Test: blockdev write zeroes read split partial ...passed 00:23:30.084 Test: blockdev reset ...[2024-04-18 11:14:38.212663] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:30.084 [2024-04-18 11:14:38.213007] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:23:30.084 [2024-04-18 11:14:38.231175] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:30.084 passed 00:23:30.084 Test: blockdev write read 8 blocks ...passed 00:23:30.084 Test: blockdev write read size > 128k ...passed 00:23:30.084 Test: blockdev write read invalid size ...passed 00:23:30.084 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:30.084 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:30.084 Test: blockdev write read max offset ...passed 00:23:30.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:30.341 Test: blockdev writev readv 8 blocks ...passed 00:23:30.341 Test: blockdev writev readv 30 x 1block ...passed 00:23:30.341 Test: blockdev writev readv block ...passed 00:23:30.341 Test: blockdev writev readv size > 128k ...passed 00:23:30.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:30.341 Test: blockdev comparev and writev ...[2024-04-18 11:14:38.414466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.341 [2024-04-18 11:14:38.414963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.341 [2024-04-18 11:14:38.415236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.341 [2024-04-18 11:14:38.415343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:30.341 [2024-04-18 11:14:38.416191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.341 [2024-04-18 11:14:38.416332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:30.341 [2024-04-18 11:14:38.416435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.341 [2024-04-18 11:14:38.416522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:30.341 [2024-04-18 11:14:38.417341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.341 [2024-04-18 11:14:38.417444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:30.341 [2024-04-18 11:14:38.417565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.341 [2024-04-18 11:14:38.417856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:30.341 [2024-04-18 11:14:38.418462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.341 [2024-04-18 11:14:38.418850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:30.341 [2024-04-18 11:14:38.419416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.341 [2024-04-18 11:14:38.419800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:30.341 passed 00:23:30.341 Test: blockdev nvme passthru rw ...passed 00:23:30.341 Test: blockdev nvme passthru vendor specific ...[2024-04-18 11:14:38.503797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.341 [2024-04-18 11:14:38.504591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:30.341 [2024-04-18 11:14:38.504992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.341 [2024-04-18 11:14:38.505397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:30.341 [2024-04-18 11:14:38.506163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.341 [2024-04-18 11:14:38.506279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:30.341 [2024-04-18 11:14:38.506540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.341 [2024-04-18 11:14:38.506830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 spassed 00:23:30.341 Test: blockdev nvme admin passthru ...qhd:002f p:0 m:0 dnr:0 00:23:30.341 passed 00:23:30.600 Test: blockdev copy ...passed 00:23:30.600 00:23:30.600 Run Summary: Type Total Ran Passed Failed Inactive 00:23:30.600 suites 1 1 n/a 0 0 00:23:30.600 tests 23 23 23 0 0 00:23:30.600 asserts 152 152 152 0 n/a 00:23:30.600 00:23:30.600 Elapsed time = 1.085 seconds 00:23:31.533 11:14:39 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.533 11:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.533 11:14:39 -- common/autotest_common.sh@10 -- # set +x 00:23:31.533 11:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.533 11:14:39 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:31.533 11:14:39 -- target/bdevio.sh@30 -- # nvmftestfini 00:23:31.533 11:14:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:31.533 11:14:39 -- nvmf/common.sh@117 -- # sync 00:23:31.790 11:14:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.790 11:14:39 -- nvmf/common.sh@120 -- # set +e 00:23:31.790 11:14:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.791 11:14:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.791 rmmod nvme_tcp 00:23:31.791 rmmod nvme_fabrics 00:23:31.791 rmmod nvme_keyring 00:23:31.791 11:14:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.791 11:14:39 -- nvmf/common.sh@124 -- # set -e 00:23:31.791 11:14:39 -- nvmf/common.sh@125 -- # return 0 00:23:31.791 11:14:39 -- nvmf/common.sh@478 -- # '[' -n 78165 ']' 00:23:31.791 11:14:39 -- nvmf/common.sh@479 -- # killprocess 78165 00:23:31.791 11:14:39 -- common/autotest_common.sh@936 -- # '[' -z 78165 ']' 00:23:31.791 11:14:39 -- common/autotest_common.sh@940 -- # kill -0 78165 00:23:31.791 11:14:39 -- common/autotest_common.sh@941 -- # uname 00:23:31.791 11:14:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.791 11:14:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78165 00:23:31.791 killing process with pid 78165 00:23:31.791 
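The run summarized above is driven entirely over the target's RPC socket. Assuming rpc_cmd in the trace is the usual thin wrapper around scripts/rpc.py (the helper body is not shown in this log), the provisioning that bdevio.sh performs is roughly equivalent to:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then attaches as an initiator using the generated JSON printed above
    # (a single bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420, digests off):
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)

The 23 CUnit tests in the summary run against the resulting Nvme1n1 bdev; the COMPARE FAILURE and ABORTED - FAILED FUSED notices accompany the fused compare-and-write cases, and the suite still reports all 23 tests passed.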
11:14:39 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:23:31.791 11:14:39 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:23:31.791 11:14:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78165' 00:23:31.791 11:14:39 -- common/autotest_common.sh@955 -- # kill 78165 00:23:31.791 11:14:39 -- common/autotest_common.sh@960 -- # wait 78165 00:23:33.162 11:14:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:33.162 11:14:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:33.162 11:14:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:33.162 11:14:41 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.162 11:14:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.162 11:14:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.162 11:14:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.162 11:14:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.162 11:14:41 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:33.162 ************************************ 00:23:33.162 END TEST nvmf_bdevio 00:23:33.162 ************************************ 00:23:33.162 00:23:33.162 real 0m5.892s 00:23:33.162 user 0m23.284s 00:23:33.162 sys 0m1.121s 00:23:33.162 11:14:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:33.162 11:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:33.162 11:14:41 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:23:33.162 11:14:41 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:33.162 11:14:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:23:33.162 11:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:33.162 11:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:33.420 ************************************ 00:23:33.420 START TEST nvmf_bdevio_no_huge 00:23:33.420 ************************************ 00:23:33.420 11:14:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:33.420 * Looking for test storage... 
00:23:33.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:33.420 11:14:41 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:33.420 11:14:41 -- nvmf/common.sh@7 -- # uname -s 00:23:33.420 11:14:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.420 11:14:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.420 11:14:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.420 11:14:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.420 11:14:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.420 11:14:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.420 11:14:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.420 11:14:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.420 11:14:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.420 11:14:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.420 11:14:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:23:33.420 11:14:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:23:33.420 11:14:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.420 11:14:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.420 11:14:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:33.420 11:14:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.420 11:14:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:33.420 11:14:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.420 11:14:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.420 11:14:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.420 11:14:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.421 11:14:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.421 11:14:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.421 11:14:41 -- paths/export.sh@5 -- # export PATH 00:23:33.421 11:14:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.421 11:14:41 -- nvmf/common.sh@47 -- # : 0 00:23:33.421 11:14:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.421 11:14:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.421 11:14:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.421 11:14:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.421 11:14:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.421 11:14:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.421 11:14:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.421 11:14:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.421 11:14:41 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.421 11:14:41 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.421 11:14:41 -- target/bdevio.sh@14 -- # nvmftestinit 00:23:33.421 11:14:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:33.421 11:14:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.421 11:14:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:33.421 11:14:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:33.421 11:14:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:33.421 11:14:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.421 11:14:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.421 11:14:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.421 11:14:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:33.421 11:14:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:33.421 11:14:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:33.421 11:14:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:33.421 11:14:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:33.421 11:14:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:33.421 11:14:41 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.421 11:14:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.421 11:14:41 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:33.421 11:14:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:33.421 11:14:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:33.421 11:14:41 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:33.421 11:14:41 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:33.421 11:14:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.421 11:14:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:33.421 11:14:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:33.421 11:14:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:33.421 11:14:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:33.421 11:14:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:33.421 11:14:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:33.421 Cannot find device "nvmf_tgt_br" 00:23:33.421 11:14:41 -- nvmf/common.sh@155 -- # true 00:23:33.421 11:14:41 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.421 Cannot find device "nvmf_tgt_br2" 00:23:33.421 11:14:41 -- nvmf/common.sh@156 -- # true 00:23:33.421 11:14:41 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:33.421 11:14:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:33.421 Cannot find device "nvmf_tgt_br" 00:23:33.421 11:14:41 -- nvmf/common.sh@158 -- # true 00:23:33.421 11:14:41 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:33.421 Cannot find device "nvmf_tgt_br2" 00:23:33.421 11:14:41 -- nvmf/common.sh@159 -- # true 00:23:33.421 11:14:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:33.421 11:14:41 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:33.421 11:14:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:33.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.421 11:14:41 -- nvmf/common.sh@162 -- # true 00:23:33.421 11:14:41 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:33.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.421 11:14:41 -- nvmf/common.sh@163 -- # true 00:23:33.421 11:14:41 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:33.421 11:14:41 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:33.421 11:14:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:33.421 11:14:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:33.741 11:14:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:33.741 11:14:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:33.741 11:14:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:33.741 11:14:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:33.741 11:14:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:33.741 11:14:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:33.741 11:14:41 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:33.741 11:14:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:33.741 11:14:41 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:33.741 11:14:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:33.741 11:14:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:33.741 11:14:41 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:23:33.741 11:14:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:33.741 11:14:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:33.741 11:14:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:33.741 11:14:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:33.741 11:14:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:33.741 11:14:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:33.741 11:14:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:33.741 11:14:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:33.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:23:33.741 00:23:33.741 --- 10.0.0.2 ping statistics --- 00:23:33.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.741 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:23:33.741 11:14:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:33.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:33.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:23:33.741 00:23:33.741 --- 10.0.0.3 ping statistics --- 00:23:33.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.741 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:23:33.741 11:14:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:33.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:23:33.741 00:23:33.741 --- 10.0.0.1 ping statistics --- 00:23:33.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.741 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:33.741 11:14:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.741 11:14:41 -- nvmf/common.sh@422 -- # return 0 00:23:33.741 11:14:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:33.741 11:14:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.741 11:14:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:33.741 11:14:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:33.741 11:14:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.741 11:14:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:33.741 11:14:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:33.741 11:14:41 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:33.741 11:14:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:33.741 11:14:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:33.741 11:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:33.741 11:14:41 -- nvmf/common.sh@470 -- # nvmfpid=78455 00:23:33.741 11:14:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:33.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:33.741 11:14:41 -- nvmf/common.sh@471 -- # waitforlisten 78455 00:23:33.741 11:14:41 -- common/autotest_common.sh@817 -- # '[' -z 78455 ']' 00:23:33.741 11:14:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.741 11:14:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:33.741 11:14:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.741 11:14:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:33.741 11:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:33.998 [2024-04-18 11:14:41.956935] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:33.998 [2024-04-18 11:14:41.957379] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:33.998 [2024-04-18 11:14:42.150352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.256 [2024-04-18 11:14:42.454447] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.256 [2024-04-18 11:14:42.454744] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.256 [2024-04-18 11:14:42.454962] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.256 [2024-04-18 11:14:42.455087] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.256 [2024-04-18 11:14:42.455127] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
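Relative to the first pass, the target launch is the main functional difference: nvmf_tgt runs inside the same namespace but without hugepages and with an explicit 1024 MB memory cap. The command traced above, reformatted for readability (core mask 0x78 selects cores 3-6, matching the reactor messages that follow):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78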
00:23:34.256 [2024-04-18 11:14:42.455297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:34.256 [2024-04-18 11:14:42.455376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:34.256 [2024-04-18 11:14:42.455502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.256 [2024-04-18 11:14:42.455500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:34.822 11:14:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:34.822 11:14:42 -- common/autotest_common.sh@850 -- # return 0 00:23:34.822 11:14:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:34.822 11:14:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:34.822 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:34.822 11:14:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.822 11:14:42 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:34.822 11:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.822 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:34.822 [2024-04-18 11:14:42.875811] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.822 11:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.822 11:14:42 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:34.822 11:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.822 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:34.822 Malloc0 00:23:34.822 11:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.822 11:14:42 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.822 11:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.822 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:34.822 11:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.822 11:14:42 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:34.822 11:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.822 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:34.822 11:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.822 11:14:42 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.822 11:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.822 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:34.822 [2024-04-18 11:14:42.964830] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.822 11:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.822 11:14:42 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:34.822 11:14:42 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:34.822 11:14:42 -- nvmf/common.sh@521 -- # config=() 00:23:34.822 11:14:42 -- nvmf/common.sh@521 -- # local subsystem config 00:23:34.822 11:14:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:34.822 11:14:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:34.822 { 00:23:34.822 "params": { 00:23:34.822 "name": "Nvme$subsystem", 00:23:34.822 "trtype": "$TEST_TRANSPORT", 00:23:34.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.822 "adrfam": "ipv4", 00:23:34.822 "trsvcid": "$NVMF_PORT", 
00:23:34.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.822 "hdgst": ${hdgst:-false}, 00:23:34.822 "ddgst": ${ddgst:-false} 00:23:34.822 }, 00:23:34.822 "method": "bdev_nvme_attach_controller" 00:23:34.822 } 00:23:34.822 EOF 00:23:34.822 )") 00:23:34.822 11:14:42 -- nvmf/common.sh@543 -- # cat 00:23:34.822 11:14:42 -- nvmf/common.sh@545 -- # jq . 00:23:34.822 11:14:42 -- nvmf/common.sh@546 -- # IFS=, 00:23:34.822 11:14:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:34.822 "params": { 00:23:34.822 "name": "Nvme1", 00:23:34.822 "trtype": "tcp", 00:23:34.822 "traddr": "10.0.0.2", 00:23:34.822 "adrfam": "ipv4", 00:23:34.822 "trsvcid": "4420", 00:23:34.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.822 "hdgst": false, 00:23:34.822 "ddgst": false 00:23:34.822 }, 00:23:34.822 "method": "bdev_nvme_attach_controller" 00:23:34.822 }' 00:23:35.080 [2024-04-18 11:14:43.066097] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:35.080 [2024-04-18 11:14:43.066293] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid78515 ] 00:23:35.080 [2024-04-18 11:14:43.289861] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:35.339 [2024-04-18 11:14:43.544842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.339 [2024-04-18 11:14:43.544916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.339 [2024-04-18 11:14:43.544928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.904 I/O targets: 00:23:35.904 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:35.904 00:23:35.904 00:23:35.904 CUnit - A unit testing framework for C - Version 2.1-3 00:23:35.904 http://cunit.sourceforge.net/ 00:23:35.904 00:23:35.904 00:23:35.904 Suite: bdevio tests on: Nvme1n1 00:23:35.904 Test: blockdev write read block ...passed 00:23:35.904 Test: blockdev write zeroes read block ...passed 00:23:35.904 Test: blockdev write zeroes read no split ...passed 00:23:35.904 Test: blockdev write zeroes read split ...passed 00:23:35.904 Test: blockdev write zeroes read split partial ...passed 00:23:35.904 Test: blockdev reset ...[2024-04-18 11:14:44.103854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.904 [2024-04-18 11:14:44.104343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:23:35.904 [2024-04-18 11:14:44.116340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:35.904 passed 00:23:35.904 Test: blockdev write read 8 blocks ...passed 00:23:35.904 Test: blockdev write read size > 128k ...passed 00:23:35.904 Test: blockdev write read invalid size ...passed 00:23:36.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:36.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:36.162 Test: blockdev write read max offset ...passed 00:23:36.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:36.162 Test: blockdev writev readv 8 blocks ...passed 00:23:36.162 Test: blockdev writev readv 30 x 1block ...passed 00:23:36.162 Test: blockdev writev readv block ...passed 00:23:36.162 Test: blockdev writev readv size > 128k ...passed 00:23:36.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:36.162 Test: blockdev comparev and writev ...[2024-04-18 11:14:44.300896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.162 [2024-04-18 11:14:44.301025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.162 [2024-04-18 11:14:44.301091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.162 [2024-04-18 11:14:44.301146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.162 [2024-04-18 11:14:44.301800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.162 [2024-04-18 11:14:44.301847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.162 [2024-04-18 11:14:44.301876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.162 [2024-04-18 11:14:44.301893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.162 [2024-04-18 11:14:44.302438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.162 [2024-04-18 11:14:44.302482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.162 [2024-04-18 11:14:44.302511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.162 [2024-04-18 11:14:44.302527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.162 [2024-04-18 11:14:44.302950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.162 [2024-04-18 11:14:44.302981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.162 [2024-04-18 11:14:44.303007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.162 [2024-04-18 11:14:44.303022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.162 passed 00:23:36.421 Test: blockdev nvme passthru rw ...passed 00:23:36.421 Test: blockdev nvme passthru vendor specific ...[2024-04-18 11:14:44.387944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.421 [2024-04-18 11:14:44.388026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:36.421 [2024-04-18 11:14:44.388295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.421 [2024-04-18 11:14:44.388323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.421 [2024-04-18 11:14:44.388822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.421 [2024-04-18 11:14:44.388868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.421 [2024-04-18 11:14:44.389041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.421 [2024-04-18 11:14:44.389078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.421 passed 00:23:36.421 Test: blockdev nvme admin passthru ...passed 00:23:36.421 Test: blockdev copy ...passed 00:23:36.421 00:23:36.421 Run Summary: Type Total Ran Passed Failed Inactive 00:23:36.421 suites 1 1 n/a 0 0 00:23:36.421 tests 23 23 23 0 0 00:23:36.421 asserts 152 152 152 0 n/a 00:23:36.421 00:23:36.421 Elapsed time = 1.008 seconds 00:23:36.986 11:14:45 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.986 11:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.986 11:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:36.986 11:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.986 11:14:45 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:36.986 11:14:45 -- target/bdevio.sh@30 -- # nvmftestfini 00:23:36.986 11:14:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:36.986 11:14:45 -- nvmf/common.sh@117 -- # sync 00:23:37.244 11:14:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.244 11:14:45 -- nvmf/common.sh@120 -- # set +e 00:23:37.244 11:14:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.244 11:14:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.244 rmmod nvme_tcp 00:23:37.244 rmmod nvme_fabrics 00:23:37.244 rmmod nvme_keyring 00:23:37.244 11:14:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.244 11:14:45 -- nvmf/common.sh@124 -- # set -e 00:23:37.244 11:14:45 -- nvmf/common.sh@125 -- # return 0 00:23:37.244 11:14:45 -- nvmf/common.sh@478 -- # '[' -n 78455 ']' 00:23:37.244 11:14:45 -- nvmf/common.sh@479 -- # killprocess 78455 00:23:37.244 11:14:45 -- common/autotest_common.sh@936 -- # '[' -z 78455 ']' 00:23:37.244 11:14:45 -- common/autotest_common.sh@940 -- # kill -0 78455 00:23:37.245 11:14:45 -- common/autotest_common.sh@941 -- # uname 00:23:37.245 11:14:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:37.245 11:14:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78455 00:23:37.245 killing process with pid 78455 00:23:37.245 
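Teardown follows the same nvmftestfini path as the first run: unload the initiator-side kernel modules, stop the target, then dismantle the namespace and flush the initiator address. A condensed sketch; the ip netns delete line is an assumed expansion of the _remove_spdk_ns helper, whose body is not shown in this log:

    modprobe -v -r nvme-tcp            # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"  # 78455 in this run
    ip netns delete nvmf_tgt_ns_spdk    # assumed expansion of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if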
11:14:45 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:23:37.245 11:14:45 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:23:37.245 11:14:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78455' 00:23:37.245 11:14:45 -- common/autotest_common.sh@955 -- # kill 78455 00:23:37.245 11:14:45 -- common/autotest_common.sh@960 -- # wait 78455 00:23:38.179 11:14:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:38.179 11:14:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:38.179 11:14:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:38.179 11:14:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.179 11:14:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:38.179 11:14:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.179 11:14:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.179 11:14:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.179 11:14:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:38.179 00:23:38.179 real 0m4.778s 00:23:38.179 user 0m17.795s 00:23:38.179 sys 0m1.502s 00:23:38.179 11:14:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:38.179 11:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:38.179 ************************************ 00:23:38.179 END TEST nvmf_bdevio_no_huge 00:23:38.179 ************************************ 00:23:38.179 11:14:46 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:38.179 11:14:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:38.179 11:14:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:38.179 11:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:38.179 ************************************ 00:23:38.179 START TEST nvmf_tls 00:23:38.179 ************************************ 00:23:38.179 11:14:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:38.179 * Looking for test storage... 
00:23:38.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:38.179 11:14:46 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:38.179 11:14:46 -- nvmf/common.sh@7 -- # uname -s 00:23:38.179 11:14:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.179 11:14:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.179 11:14:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.179 11:14:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.179 11:14:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.179 11:14:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.179 11:14:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.179 11:14:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.179 11:14:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.179 11:14:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.179 11:14:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:23:38.179 11:14:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:23:38.179 11:14:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.179 11:14:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.179 11:14:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:38.179 11:14:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.179 11:14:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:38.179 11:14:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.179 11:14:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.179 11:14:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.179 11:14:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.180 11:14:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.180 11:14:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.180 11:14:46 -- paths/export.sh@5 -- # export PATH 00:23:38.180 11:14:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.180 11:14:46 -- nvmf/common.sh@47 -- # : 0 00:23:38.180 11:14:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.180 11:14:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.180 11:14:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.180 11:14:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.180 11:14:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.180 11:14:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.180 11:14:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.180 11:14:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.180 11:14:46 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:38.180 11:14:46 -- target/tls.sh@62 -- # nvmftestinit 00:23:38.180 11:14:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:38.180 11:14:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.180 11:14:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:38.180 11:14:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:38.180 11:14:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:38.180 11:14:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.180 11:14:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.180 11:14:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.438 11:14:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:38.438 11:14:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:38.438 11:14:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:38.438 11:14:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:38.439 11:14:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:38.439 11:14:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:38.439 11:14:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.439 11:14:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.439 11:14:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:38.439 11:14:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:38.439 11:14:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:38.439 11:14:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:38.439 11:14:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:38.439 
11:14:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.439 11:14:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:38.439 11:14:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:38.439 11:14:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:38.439 11:14:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:38.439 11:14:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:38.439 11:14:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:38.439 Cannot find device "nvmf_tgt_br" 00:23:38.439 11:14:46 -- nvmf/common.sh@155 -- # true 00:23:38.439 11:14:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:38.439 Cannot find device "nvmf_tgt_br2" 00:23:38.439 11:14:46 -- nvmf/common.sh@156 -- # true 00:23:38.439 11:14:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:38.439 11:14:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:38.439 Cannot find device "nvmf_tgt_br" 00:23:38.439 11:14:46 -- nvmf/common.sh@158 -- # true 00:23:38.439 11:14:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:38.439 Cannot find device "nvmf_tgt_br2" 00:23:38.439 11:14:46 -- nvmf/common.sh@159 -- # true 00:23:38.439 11:14:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:38.439 11:14:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:38.439 11:14:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:38.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:38.439 11:14:46 -- nvmf/common.sh@162 -- # true 00:23:38.439 11:14:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:38.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:38.439 11:14:46 -- nvmf/common.sh@163 -- # true 00:23:38.439 11:14:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:38.439 11:14:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:38.439 11:14:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:38.439 11:14:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:38.439 11:14:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:38.439 11:14:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:38.439 11:14:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:38.439 11:14:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:38.439 11:14:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:38.439 11:14:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:38.439 11:14:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:38.439 11:14:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:38.439 11:14:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:38.439 11:14:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:38.439 11:14:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:38.439 11:14:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:38.439 11:14:46 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:38.439 11:14:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:38.697 11:14:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:38.697 11:14:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:38.697 11:14:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:38.697 11:14:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:38.697 11:14:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:38.697 11:14:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:38.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:23:38.697 00:23:38.697 --- 10.0.0.2 ping statistics --- 00:23:38.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.697 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:38.697 11:14:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:38.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:38.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:23:38.697 00:23:38.697 --- 10.0.0.3 ping statistics --- 00:23:38.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.697 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:38.697 11:14:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:38.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:23:38.697 00:23:38.697 --- 10.0.0.1 ping statistics --- 00:23:38.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.697 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:23:38.697 11:14:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.697 11:14:46 -- nvmf/common.sh@422 -- # return 0 00:23:38.697 11:14:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:38.697 11:14:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.697 11:14:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:38.697 11:14:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:38.697 11:14:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.697 11:14:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:38.697 11:14:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:38.697 11:14:46 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:38.697 11:14:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:38.697 11:14:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:38.697 11:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:38.697 11:14:46 -- nvmf/common.sh@470 -- # nvmfpid=78742 00:23:38.697 11:14:46 -- nvmf/common.sh@471 -- # waitforlisten 78742 00:23:38.697 11:14:46 -- common/autotest_common.sh@817 -- # '[' -z 78742 ']' 00:23:38.697 11:14:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.697 11:14:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:38.697 11:14:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:38.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
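The nvmf_veth_init steps above build the test network from scratch: a nvmf_tgt_ns_spdk namespace holds the target ends of two veth pairs (10.0.0.2/24 and 10.0.0.3/24), the initiator end stays in the root namespace on 10.0.0.1/24, the host-side peers are enslaved to an nvmf_br bridge, iptables opens TCP port 4420, and the three pings confirm connectivity in both directions. A condensed sketch of that setup, using the interface names and addresses shown in the log:

# condensed sketch of nvmf_veth_init (names and addresses as in the log above)
ip netns add nvmf_tgt_ns_spdk

# one initiator-side veth pair, two target-side pairs
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring the links up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# let NVMe/TCP traffic (port 4420) in and allow bridge-local forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity checks: initiator -> both target addresses, target -> initiator
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1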
00:23:38.697 11:14:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.697 11:14:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:38.697 11:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:38.697 [2024-04-18 11:14:46.873298] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:38.698 [2024-04-18 11:14:46.873508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.955 [2024-04-18 11:14:47.051354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.213 [2024-04-18 11:14:47.346939] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.213 [2024-04-18 11:14:47.347021] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.213 [2024-04-18 11:14:47.347042] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.213 [2024-04-18 11:14:47.347068] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.213 [2024-04-18 11:14:47.347083] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.213 [2024-04-18 11:14:47.347138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.778 11:14:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:39.778 11:14:47 -- common/autotest_common.sh@850 -- # return 0 00:23:39.778 11:14:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:39.778 11:14:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:39.778 11:14:47 -- common/autotest_common.sh@10 -- # set +x 00:23:39.778 11:14:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.778 11:14:47 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:39.778 11:14:47 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:40.038 true 00:23:40.038 11:14:48 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:40.038 11:14:48 -- target/tls.sh@73 -- # jq -r .tls_version 00:23:40.610 11:14:48 -- target/tls.sh@73 -- # version=0 00:23:40.610 11:14:48 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:40.610 11:14:48 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:40.610 11:14:48 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:40.610 11:14:48 -- target/tls.sh@81 -- # jq -r .tls_version 00:23:41.177 11:14:49 -- target/tls.sh@81 -- # version=13 00:23:41.177 11:14:49 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:41.177 11:14:49 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:41.435 11:14:49 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.435 11:14:49 -- target/tls.sh@89 -- # jq -r .tls_version 00:23:41.694 11:14:49 -- target/tls.sh@89 -- # version=7 00:23:41.694 11:14:49 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:41.694 11:14:49 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:41.694 11:14:49 -- 
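What follows is the socket-layer setup for TLS: the default sock implementation is switched to the ssl module and the --tls-version and kTLS knobs are round-tripped through sock_impl_set_options / sock_impl_get_options to confirm they stick. A minimal replay of those RPCs, assuming the target is already up on the default /var/tmp/spdk.sock:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# make the TLS-capable ssl implementation the default for new sockets
$rpc sock_set_default_impl -i ssl

# the tls_version option must read back exactly what was set (13, then 7)
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
$rpc sock_impl_set_options -i ssl --tls-version 7
$rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 7

# same round trip for kTLS offload
$rpc sock_impl_set_options -i ssl --enable-ktls
$rpc sock_impl_get_options -i ssl | jq -r .enable_ktls   # expect true
$rpc sock_impl_set_options -i ssl --disable-ktls
$rpc sock_impl_get_options -i ssl | jq -r .enable_ktls   # expect false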
target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.952 11:14:50 -- target/tls.sh@96 -- # ktls=false 00:23:41.952 11:14:50 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:41.952 11:14:50 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:42.210 11:14:50 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:42.210 11:14:50 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:42.468 11:14:50 -- target/tls.sh@104 -- # ktls=true 00:23:42.468 11:14:50 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:42.468 11:14:50 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:42.726 11:14:50 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:42.726 11:14:50 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:42.984 11:14:51 -- target/tls.sh@112 -- # ktls=false 00:23:42.984 11:14:51 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:42.984 11:14:51 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:42.984 11:14:51 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:42.984 11:14:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:42.984 11:14:51 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:42.984 11:14:51 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:42.984 11:14:51 -- nvmf/common.sh@693 -- # digest=1 00:23:42.984 11:14:51 -- nvmf/common.sh@694 -- # python - 00:23:42.984 11:14:51 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:42.984 11:14:51 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:42.984 11:14:51 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:42.984 11:14:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:42.984 11:14:51 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:42.984 11:14:51 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:23:42.984 11:14:51 -- nvmf/common.sh@693 -- # digest=1 00:23:42.984 11:14:51 -- nvmf/common.sh@694 -- # python - 00:23:42.984 11:14:51 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:42.984 11:14:51 -- target/tls.sh@121 -- # mktemp 00:23:42.984 11:14:51 -- target/tls.sh@121 -- # key_path=/tmp/tmp.itFYbR4acc 00:23:42.984 11:14:51 -- target/tls.sh@122 -- # mktemp 00:23:42.984 11:14:51 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.7JNyQs2MoJ 00:23:42.984 11:14:51 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:42.984 11:14:51 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:42.984 11:14:51 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.itFYbR4acc 00:23:42.984 11:14:51 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7JNyQs2MoJ 00:23:42.984 11:14:51 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:43.552 11:14:51 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:44.121 11:14:52 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.itFYbR4acc 00:23:44.121 11:14:52 -- target/tls.sh@49 -- # local 
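format_interchange_psk wraps a configured key in the NVMe TLS PSK interchange format: the literal "NVMeTLSkey-1:", a two-digit hash identifier, the base64 of the key bytes with a CRC32 appended, and a closing ":". A rough, hypothetical stand-in for the python helper invoked above, assuming the CRC32 is appended as four little-endian bytes as in the TP 8011 interchange format (the real helper is format_key in nvmf/common.sh):

# hypothetical re-implementation of format_interchange_psk <key> <digest>
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()          # the key is used as a literal ASCII string
digest = int(sys.argv[2])           # hash id: 01 = SHA-256, 02 = SHA-384 variant
crc = struct.pack("<I", zlib.crc32(key))
print("NVMeTLSkey-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
# expected (from the log above): NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: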
key=/tmp/tmp.itFYbR4acc 00:23:44.121 11:14:52 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:44.121 [2024-04-18 11:14:52.332609] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.379 11:14:52 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:44.638 11:14:52 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:44.896 [2024-04-18 11:14:52.876753] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.896 [2024-04-18 11:14:52.877075] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.896 11:14:52 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:45.156 malloc0 00:23:45.156 11:14:53 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:45.413 11:14:53 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itFYbR4acc 00:23:45.672 [2024-04-18 11:14:53.802501] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:45.672 11:14:53 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.itFYbR4acc 00:23:57.889 Initializing NVMe Controllers 00:23:57.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:57.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:57.889 Initialization complete. Launching workers. 
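setup_nvmf_tgt above reduces to a handful of RPCs against the in-namespace target, after which spdk_nvme_perf exercises the TLS listener. A condensed replay of those calls, reusing the subsystem names and the 0600 key file created earlier:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.itFYbR4acc

# transport, subsystem, TLS-enabled listener (-k), and a malloc-backed namespace
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# authorize host1, tying it to the PSK file
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

# drive I/O over TLS; -S ssl selects the ssl sock impl, --psk-path points at the same key
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key"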
00:23:57.889 ======================================================== 00:23:57.889 Latency(us) 00:23:57.889 Device Information : IOPS MiB/s Average min max 00:23:57.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6459.68 25.23 9911.22 2096.34 14522.31 00:23:57.889 ======================================================== 00:23:57.889 Total : 6459.68 25.23 9911.22 2096.34 14522.31 00:23:57.889 00:23:57.889 11:15:04 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.itFYbR4acc 00:23:57.889 11:15:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:57.889 11:15:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:57.889 11:15:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:57.889 11:15:04 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.itFYbR4acc' 00:23:57.889 11:15:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:57.889 11:15:04 -- target/tls.sh@28 -- # bdevperf_pid=79113 00:23:57.889 11:15:04 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:57.889 11:15:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:57.889 11:15:04 -- target/tls.sh@31 -- # waitforlisten 79113 /var/tmp/bdevperf.sock 00:23:57.889 11:15:04 -- common/autotest_common.sh@817 -- # '[' -z 79113 ']' 00:23:57.889 11:15:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.889 11:15:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:57.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.889 11:15:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.889 11:15:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:57.889 11:15:04 -- common/autotest_common.sh@10 -- # set +x 00:23:57.889 [2024-04-18 11:15:04.260955] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:23:57.889 [2024-04-18 11:15:04.261188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79113 ] 00:23:57.889 [2024-04-18 11:15:04.432564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.889 [2024-04-18 11:15:04.738365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.889 11:15:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:57.889 11:15:05 -- common/autotest_common.sh@850 -- # return 0 00:23:57.889 11:15:05 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itFYbR4acc 00:23:57.889 [2024-04-18 11:15:05.478338] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.889 [2024-04-18 11:15:05.478511] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:57.889 TLSTESTn1 00:23:57.889 11:15:05 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:57.889 Running I/O for 10 seconds... 00:24:07.856 00:24:07.856 Latency(us) 00:24:07.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.856 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:07.856 Verification LBA range: start 0x0 length 0x2000 00:24:07.856 TLSTESTn1 : 10.03 2776.15 10.84 0.00 0.00 45994.77 8817.57 39798.23 00:24:07.856 =================================================================================================================== 00:24:07.856 Total : 2776.15 10.84 0.00 0.00 45994.77 8817.57 39798.23 00:24:07.856 0 00:24:07.856 11:15:15 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:07.856 11:15:15 -- target/tls.sh@45 -- # killprocess 79113 00:24:07.856 11:15:15 -- common/autotest_common.sh@936 -- # '[' -z 79113 ']' 00:24:07.856 11:15:15 -- common/autotest_common.sh@940 -- # kill -0 79113 00:24:07.856 11:15:15 -- common/autotest_common.sh@941 -- # uname 00:24:07.856 11:15:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:07.856 11:15:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79113 00:24:07.856 11:15:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:07.856 11:15:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:07.856 11:15:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79113' 00:24:07.856 killing process with pid 79113 00:24:07.856 11:15:15 -- common/autotest_common.sh@955 -- # kill 79113 00:24:07.856 11:15:15 -- common/autotest_common.sh@960 -- # wait 79113 00:24:07.856 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.856 00:24:07.856 Latency(us) 00:24:07.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.856 =================================================================================================================== 00:24:07.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.856 [2024-04-18 11:15:15.795139] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
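run_bdevperf covers the same path from the bdev layer: bdevperf is started idle (-z) on its own RPC socket, bdev_nvme_attach_controller builds a TLS-protected NVMe bdev with the matching PSK, and bdevperf.py then runs the configured verify workload against it. A minimal sketch with the names used in this run (PIDs and timings will of course differ):

sock=/var/tmp/bdevperf.sock
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"

# start bdevperf idle and let it open its RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" \
    -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# wait for the RPC socket (the harness uses waitforlisten for this)
while [ ! -S "$sock" ]; do sleep 0.2; done

# attach an NVMe-oF/TCP controller with the PSK the target expects for host1
$rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itFYbR4acc

# run the verify workload against the resulting TLSTESTn1 bdev
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

kill "$bdevperf_pid" && wait "$bdevperf_pid"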
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:08.791 11:15:16 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7JNyQs2MoJ 00:24:08.791 11:15:16 -- common/autotest_common.sh@638 -- # local es=0 00:24:08.791 11:15:16 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7JNyQs2MoJ 00:24:08.791 11:15:16 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:08.791 11:15:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:08.791 11:15:16 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:08.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.791 11:15:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:08.791 11:15:16 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7JNyQs2MoJ 00:24:08.791 11:15:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:08.791 11:15:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:08.791 11:15:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:08.791 11:15:16 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7JNyQs2MoJ' 00:24:08.791 11:15:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.791 11:15:16 -- target/tls.sh@28 -- # bdevperf_pid=79277 00:24:08.791 11:15:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.791 11:15:16 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:08.791 11:15:16 -- target/tls.sh@31 -- # waitforlisten 79277 /var/tmp/bdevperf.sock 00:24:08.791 11:15:16 -- common/autotest_common.sh@817 -- # '[' -z 79277 ']' 00:24:08.791 11:15:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.791 11:15:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:08.791 11:15:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.791 11:15:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:08.792 11:15:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.050 [2024-04-18 11:15:17.068276] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:09.050 [2024-04-18 11:15:17.068431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79277 ] 00:24:09.050 [2024-04-18 11:15:17.235039] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.308 [2024-04-18 11:15:17.475411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.875 11:15:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:09.875 11:15:18 -- common/autotest_common.sh@850 -- # return 0 00:24:09.875 11:15:18 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7JNyQs2MoJ 00:24:10.133 [2024-04-18 11:15:18.285673] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.133 [2024-04-18 11:15:18.285842] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:10.133 [2024-04-18 11:15:18.300648] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:10.133 [2024-04-18 11:15:18.300991] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:24:10.133 [2024-04-18 11:15:18.301964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:24:10.133 [2024-04-18 11:15:18.302952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.133 [2024-04-18 11:15:18.302996] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:10.133 [2024-04-18 11:15:18.303017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
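This attach is meant to fail: /tmp/tmp.7JNyQs2MoJ holds the second key, which was never registered with the target, so the connection is torn down during setup (errno 107 above) and the controller never comes up. The same expected-failure wrapper is reused below for an unauthorized hostnqn (host2), a subsystem the host may not reach (cnode2), and a connection attempted with no PSK at all. A simplified sketch of the pattern (the real NOT helper lives in common/autotest_common.sh and tracks exit codes more carefully):

# run a command that must fail; succeed only if it does
NOT() {
    if "$@"; then
        return 1        # unexpected success -> fail the test
    fi
    return 0            # the command was rejected, as required
}

# e.g. a PSK the target does not know about must not produce a working bdev
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7JNyQs2MoJ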
00:24:10.133 2024/04/18 11:15:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.7JNyQs2MoJ subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:24:10.133 request: 00:24:10.133 { 00:24:10.133 "method": "bdev_nvme_attach_controller", 00:24:10.133 "params": { 00:24:10.134 "name": "TLSTEST", 00:24:10.134 "trtype": "tcp", 00:24:10.134 "traddr": "10.0.0.2", 00:24:10.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.134 "adrfam": "ipv4", 00:24:10.134 "trsvcid": "4420", 00:24:10.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.134 "psk": "/tmp/tmp.7JNyQs2MoJ" 00:24:10.134 } 00:24:10.134 } 00:24:10.134 Got JSON-RPC error response 00:24:10.134 GoRPCClient: error on JSON-RPC call 00:24:10.134 11:15:18 -- target/tls.sh@36 -- # killprocess 79277 00:24:10.134 11:15:18 -- common/autotest_common.sh@936 -- # '[' -z 79277 ']' 00:24:10.134 11:15:18 -- common/autotest_common.sh@940 -- # kill -0 79277 00:24:10.134 11:15:18 -- common/autotest_common.sh@941 -- # uname 00:24:10.134 11:15:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:10.134 11:15:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79277 00:24:10.443 killing process with pid 79277 00:24:10.443 Received shutdown signal, test time was about 10.000000 seconds 00:24:10.443 00:24:10.443 Latency(us) 00:24:10.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.443 =================================================================================================================== 00:24:10.443 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:10.443 11:15:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:10.443 11:15:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:10.443 11:15:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79277' 00:24:10.443 11:15:18 -- common/autotest_common.sh@955 -- # kill 79277 00:24:10.443 11:15:18 -- common/autotest_common.sh@960 -- # wait 79277 00:24:10.443 [2024-04-18 11:15:18.355618] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:11.375 11:15:19 -- target/tls.sh@37 -- # return 1 00:24:11.375 11:15:19 -- common/autotest_common.sh@641 -- # es=1 00:24:11.375 11:15:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:11.375 11:15:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:11.375 11:15:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:11.375 11:15:19 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.itFYbR4acc 00:24:11.375 11:15:19 -- common/autotest_common.sh@638 -- # local es=0 00:24:11.375 11:15:19 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.itFYbR4acc 00:24:11.375 11:15:19 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:11.375 11:15:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:11.375 11:15:19 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:11.375 11:15:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:11.375 11:15:19 -- common/autotest_common.sh@641 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.itFYbR4acc 00:24:11.375 11:15:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:11.375 11:15:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:11.375 11:15:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:11.375 11:15:19 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.itFYbR4acc' 00:24:11.375 11:15:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.375 11:15:19 -- target/tls.sh@28 -- # bdevperf_pid=79329 00:24:11.375 11:15:19 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:11.375 11:15:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.375 11:15:19 -- target/tls.sh@31 -- # waitforlisten 79329 /var/tmp/bdevperf.sock 00:24:11.375 11:15:19 -- common/autotest_common.sh@817 -- # '[' -z 79329 ']' 00:24:11.375 11:15:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.375 11:15:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:11.375 11:15:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.375 11:15:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:11.375 11:15:19 -- common/autotest_common.sh@10 -- # set +x 00:24:11.633 [2024-04-18 11:15:19.604656] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:11.633 [2024-04-18 11:15:19.605372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79329 ] 00:24:11.633 [2024-04-18 11:15:19.776257] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.891 [2024-04-18 11:15:20.069839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.457 11:15:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:12.457 11:15:20 -- common/autotest_common.sh@850 -- # return 0 00:24:12.457 11:15:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.itFYbR4acc 00:24:12.716 [2024-04-18 11:15:20.765179] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:12.716 [2024-04-18 11:15:20.765363] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:12.716 [2024-04-18 11:15:20.775407] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:12.716 [2024-04-18 11:15:20.775470] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:12.716 [2024-04-18 11:15:20.775549] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:12.716 [2024-04-18 11:15:20.776508] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:24:12.716 [2024-04-18 11:15:20.777476] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:24:12.716 [2024-04-18 11:15:20.778463] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:12.716 [2024-04-18 11:15:20.778507] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:12.716 [2024-04-18 11:15:20.778529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:12.716 2024/04/18 11:15:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.itFYbR4acc subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:24:12.716 request: 00:24:12.716 { 00:24:12.716 "method": "bdev_nvme_attach_controller", 00:24:12.716 "params": { 00:24:12.716 "name": "TLSTEST", 00:24:12.716 "trtype": "tcp", 00:24:12.716 "traddr": "10.0.0.2", 00:24:12.716 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:12.716 "adrfam": "ipv4", 00:24:12.716 "trsvcid": "4420", 00:24:12.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.716 "psk": "/tmp/tmp.itFYbR4acc" 00:24:12.716 } 00:24:12.716 } 00:24:12.716 Got JSON-RPC error response 00:24:12.716 GoRPCClient: error on JSON-RPC call 00:24:12.716 11:15:20 -- target/tls.sh@36 -- # killprocess 79329 00:24:12.716 11:15:20 -- common/autotest_common.sh@936 -- # '[' -z 79329 ']' 00:24:12.716 11:15:20 -- common/autotest_common.sh@940 -- # kill -0 79329 00:24:12.716 11:15:20 -- common/autotest_common.sh@941 -- # uname 00:24:12.716 11:15:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:12.716 11:15:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79329 00:24:12.716 killing process with pid 79329 00:24:12.716 Received shutdown signal, test time was about 10.000000 seconds 00:24:12.716 00:24:12.716 Latency(us) 00:24:12.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.716 =================================================================================================================== 00:24:12.716 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:12.716 11:15:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:12.716 11:15:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:12.716 11:15:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79329' 00:24:12.716 11:15:20 -- common/autotest_common.sh@955 -- # kill 79329 00:24:12.716 11:15:20 -- common/autotest_common.sh@960 -- # wait 79329 00:24:12.716 [2024-04-18 11:15:20.828304] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:14.091 11:15:21 -- target/tls.sh@37 -- # return 1 00:24:14.091 11:15:21 -- common/autotest_common.sh@641 -- # es=1 00:24:14.091 11:15:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:14.091 11:15:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:14.091 11:15:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:14.091 11:15:21 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.itFYbR4acc 00:24:14.091 11:15:21 -- common/autotest_common.sh@638 -- # local es=0 00:24:14.091 11:15:21 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.itFYbR4acc 00:24:14.091 11:15:21 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:14.091 11:15:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:14.091 11:15:22 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:14.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.091 11:15:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:14.091 11:15:22 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.itFYbR4acc 00:24:14.091 11:15:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:14.091 11:15:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:14.091 11:15:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:14.091 11:15:22 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.itFYbR4acc' 00:24:14.091 11:15:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.091 11:15:22 -- target/tls.sh@28 -- # bdevperf_pid=79387 00:24:14.091 11:15:22 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:14.091 11:15:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:14.091 11:15:22 -- target/tls.sh@31 -- # waitforlisten 79387 /var/tmp/bdevperf.sock 00:24:14.091 11:15:22 -- common/autotest_common.sh@817 -- # '[' -z 79387 ']' 00:24:14.091 11:15:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.091 11:15:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:14.091 11:15:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.091 11:15:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:14.091 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:24:14.091 [2024-04-18 11:15:22.105048] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:14.091 [2024-04-18 11:15:22.105289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79387 ] 00:24:14.091 [2024-04-18 11:15:22.276925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.658 [2024-04-18 11:15:22.590676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.917 11:15:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:14.917 11:15:23 -- common/autotest_common.sh@850 -- # return 0 00:24:14.917 11:15:23 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itFYbR4acc 00:24:15.175 [2024-04-18 11:15:23.346753] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.175 [2024-04-18 11:15:23.346945] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:15.175 [2024-04-18 11:15:23.361868] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:15.175 [2024-04-18 11:15:23.361953] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:15.175 [2024-04-18 11:15:23.362046] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:15.175 [2024-04-18 11:15:23.362122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:24:15.175 [2024-04-18 11:15:23.363081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:24:15.175 [2024-04-18 11:15:23.364068] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:15.175 [2024-04-18 11:15:23.364134] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:15.175 [2024-04-18 11:15:23.364164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:15.175 2024/04/18 11:15:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.itFYbR4acc subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:24:15.175 request: 00:24:15.175 { 00:24:15.175 "method": "bdev_nvme_attach_controller", 00:24:15.175 "params": { 00:24:15.175 "name": "TLSTEST", 00:24:15.175 "trtype": "tcp", 00:24:15.175 "traddr": "10.0.0.2", 00:24:15.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.175 "adrfam": "ipv4", 00:24:15.175 "trsvcid": "4420", 00:24:15.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:15.175 "psk": "/tmp/tmp.itFYbR4acc" 00:24:15.175 } 00:24:15.175 } 00:24:15.175 Got JSON-RPC error response 00:24:15.175 GoRPCClient: error on JSON-RPC call 00:24:15.175 11:15:23 -- target/tls.sh@36 -- # killprocess 79387 00:24:15.175 11:15:23 -- common/autotest_common.sh@936 -- # '[' -z 79387 ']' 00:24:15.175 11:15:23 -- common/autotest_common.sh@940 -- # kill -0 79387 00:24:15.175 11:15:23 -- common/autotest_common.sh@941 -- # uname 00:24:15.433 11:15:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:15.433 11:15:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79387 00:24:15.433 killing process with pid 79387 00:24:15.433 Received shutdown signal, test time was about 10.000000 seconds 00:24:15.433 00:24:15.433 Latency(us) 00:24:15.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.433 =================================================================================================================== 00:24:15.433 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:15.433 11:15:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:15.433 11:15:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:15.433 11:15:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79387' 00:24:15.433 11:15:23 -- common/autotest_common.sh@955 -- # kill 79387 00:24:15.433 [2024-04-18 11:15:23.422921] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:15.433 11:15:23 -- common/autotest_common.sh@960 -- # wait 79387 00:24:16.367 11:15:24 -- target/tls.sh@37 -- # return 1 00:24:16.367 11:15:24 -- common/autotest_common.sh@641 -- # es=1 00:24:16.367 11:15:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:16.367 11:15:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:16.367 11:15:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:16.367 11:15:24 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:16.367 11:15:24 -- common/autotest_common.sh@638 -- # local es=0 00:24:16.367 11:15:24 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:16.367 11:15:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:16.367 11:15:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:16.367 11:15:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:16.367 11:15:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:16.367 11:15:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
00:24:16.367 11:15:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:16.367 11:15:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:16.367 11:15:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:16.367 11:15:24 -- target/tls.sh@23 -- # psk= 00:24:16.367 11:15:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.367 11:15:24 -- target/tls.sh@28 -- # bdevperf_pid=79439 00:24:16.367 11:15:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:16.367 11:15:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:16.367 11:15:24 -- target/tls.sh@31 -- # waitforlisten 79439 /var/tmp/bdevperf.sock 00:24:16.367 11:15:24 -- common/autotest_common.sh@817 -- # '[' -z 79439 ']' 00:24:16.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.367 11:15:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.367 11:15:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:16.367 11:15:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.367 11:15:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:16.367 11:15:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.625 [2024-04-18 11:15:24.682642] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:16.625 [2024-04-18 11:15:24.682822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79439 ] 00:24:16.884 [2024-04-18 11:15:24.854828] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.884 [2024-04-18 11:15:25.101989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.450 11:15:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:17.450 11:15:25 -- common/autotest_common.sh@850 -- # return 0 00:24:17.450 11:15:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:17.709 [2024-04-18 11:15:25.922905] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:17.709 [2024-04-18 11:15:25.925120] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:24:17.709 [2024-04-18 11:15:25.926094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.709 [2024-04-18 11:15:25.926170] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:17.709 [2024-04-18 11:15:25.926193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:17.709 2024/04/18 11:15:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:24:17.968 request: 00:24:17.968 { 00:24:17.968 "method": "bdev_nvme_attach_controller", 00:24:17.968 "params": { 00:24:17.968 "name": "TLSTEST", 00:24:17.968 "trtype": "tcp", 00:24:17.968 "traddr": "10.0.0.2", 00:24:17.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:17.968 "adrfam": "ipv4", 00:24:17.968 "trsvcid": "4420", 00:24:17.968 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:24:17.968 } 00:24:17.968 } 00:24:17.968 Got JSON-RPC error response 00:24:17.968 GoRPCClient: error on JSON-RPC call 00:24:17.968 11:15:25 -- target/tls.sh@36 -- # killprocess 79439 00:24:17.968 11:15:25 -- common/autotest_common.sh@936 -- # '[' -z 79439 ']' 00:24:17.968 11:15:25 -- common/autotest_common.sh@940 -- # kill -0 79439 00:24:17.968 11:15:25 -- common/autotest_common.sh@941 -- # uname 00:24:17.968 11:15:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.968 11:15:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79439 00:24:17.968 killing process with pid 79439 00:24:17.968 Received shutdown signal, test time was about 10.000000 seconds 00:24:17.968 00:24:17.968 Latency(us) 00:24:17.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.968 =================================================================================================================== 00:24:17.968 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:17.968 11:15:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:17.968 11:15:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:17.968 11:15:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79439' 00:24:17.968 11:15:25 -- common/autotest_common.sh@955 -- # kill 79439 00:24:17.968 11:15:25 -- common/autotest_common.sh@960 -- # wait 79439 00:24:19.345 11:15:27 -- target/tls.sh@37 -- # return 1 00:24:19.345 11:15:27 -- common/autotest_common.sh@641 -- # es=1 00:24:19.345 11:15:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:19.345 11:15:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:19.345 11:15:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:19.345 11:15:27 -- target/tls.sh@158 -- # killprocess 78742 00:24:19.345 11:15:27 -- common/autotest_common.sh@936 -- # '[' -z 78742 ']' 00:24:19.345 11:15:27 -- common/autotest_common.sh@940 -- # kill -0 78742 00:24:19.345 11:15:27 -- common/autotest_common.sh@941 -- # uname 00:24:19.345 11:15:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:19.345 11:15:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78742 00:24:19.345 11:15:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:19.345 11:15:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:19.345 killing process with pid 78742 00:24:19.345 11:15:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78742' 00:24:19.345 11:15:27 -- common/autotest_common.sh@955 -- # kill 78742 00:24:19.345 [2024-04-18 11:15:27.165034] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:19.345 11:15:27 -- 
common/autotest_common.sh@960 -- # wait 78742 00:24:20.347 11:15:28 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:20.347 11:15:28 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:20.347 11:15:28 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:20.347 11:15:28 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:20.347 11:15:28 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:20.347 11:15:28 -- nvmf/common.sh@693 -- # digest=2 00:24:20.347 11:15:28 -- nvmf/common.sh@694 -- # python - 00:24:20.347 11:15:28 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:20.347 11:15:28 -- target/tls.sh@160 -- # mktemp 00:24:20.347 11:15:28 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.mufboOTJKP 00:24:20.347 11:15:28 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:20.347 11:15:28 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.mufboOTJKP 00:24:20.347 11:15:28 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:20.347 11:15:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:20.347 11:15:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:20.347 11:15:28 -- common/autotest_common.sh@10 -- # set +x 00:24:20.347 11:15:28 -- nvmf/common.sh@470 -- # nvmfpid=79524 00:24:20.347 11:15:28 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:20.347 11:15:28 -- nvmf/common.sh@471 -- # waitforlisten 79524 00:24:20.347 11:15:28 -- common/autotest_common.sh@817 -- # '[' -z 79524 ']' 00:24:20.347 11:15:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.347 11:15:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:20.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.348 11:15:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.348 11:15:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:20.348 11:15:28 -- common/autotest_common.sh@10 -- # set +x 00:24:20.607 [2024-04-18 11:15:28.640813] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:20.607 [2024-04-18 11:15:28.640993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.607 [2024-04-18 11:15:28.806116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.865 [2024-04-18 11:15:29.046352] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.866 [2024-04-18 11:15:29.046423] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.866 [2024-04-18 11:15:29.046443] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.866 [2024-04-18 11:15:29.046469] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.866 [2024-04-18 11:15:29.046487] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
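From here the whole sequence repeats with a longer secret: a 48-byte key is wrapped with hash identifier 02 (the SHA-384 flavour of the interchange format), written to a fresh temp file, and a new nvmf_tgt instance is started for it. Later the test loosens the file mode to 0666 and wraps the attach in NOT, presumably to check that a world-readable PSK file is rejected. A short sketch of the key handling, with the literal key value taken from the log:

# second phase: same flow, 48-byte key, hash id 02
key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
key_long_path=$(mktemp)              # /tmp/tmp.mufboOTJKP in this run
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"          # keep the PSK private for the positive test
# ...later: chmod 0666 "$key_long_path", after which the attach is expected to fail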
00:24:20.866 [2024-04-18 11:15:29.046530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.431 11:15:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:21.431 11:15:29 -- common/autotest_common.sh@850 -- # return 0 00:24:21.431 11:15:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:21.431 11:15:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:21.431 11:15:29 -- common/autotest_common.sh@10 -- # set +x 00:24:21.431 11:15:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.431 11:15:29 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.mufboOTJKP 00:24:21.431 11:15:29 -- target/tls.sh@49 -- # local key=/tmp/tmp.mufboOTJKP 00:24:21.431 11:15:29 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:21.689 [2024-04-18 11:15:29.886873] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.689 11:15:29 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:21.947 11:15:30 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:22.205 [2024-04-18 11:15:30.383046] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.205 [2024-04-18 11:15:30.383374] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.205 11:15:30 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:22.464 malloc0 00:24:22.464 11:15:30 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:22.723 11:15:30 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mufboOTJKP 00:24:22.982 [2024-04-18 11:15:31.112713] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:22.982 11:15:31 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mufboOTJKP 00:24:22.982 11:15:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:22.982 11:15:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:22.982 11:15:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:22.982 11:15:31 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mufboOTJKP' 00:24:22.982 11:15:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:22.982 11:15:31 -- target/tls.sh@28 -- # bdevperf_pid=79627 00:24:22.982 11:15:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:22.982 11:15:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:22.982 11:15:31 -- target/tls.sh@31 -- # waitforlisten 79627 /var/tmp/bdevperf.sock 00:24:22.982 11:15:31 -- common/autotest_common.sh@817 -- # '[' -z 79627 ']' 00:24:22.982 11:15:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.982 11:15:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:22.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:22.982 11:15:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.982 11:15:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:22.982 11:15:31 -- common/autotest_common.sh@10 -- # set +x 00:24:23.240 [2024-04-18 11:15:31.255936] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:23.240 [2024-04-18 11:15:31.256257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79627 ] 00:24:23.240 [2024-04-18 11:15:31.434647] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.499 [2024-04-18 11:15:31.695165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.064 11:15:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:24.064 11:15:32 -- common/autotest_common.sh@850 -- # return 0 00:24:24.064 11:15:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mufboOTJKP 00:24:24.322 [2024-04-18 11:15:32.369383] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:24.322 [2024-04-18 11:15:32.369547] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:24.322 TLSTESTn1 00:24:24.322 11:15:32 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:24.580 Running I/O for 10 seconds... 
00:24:34.581 00:24:34.581 Latency(us) 00:24:34.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.581 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:34.581 Verification LBA range: start 0x0 length 0x2000 00:24:34.581 TLSTESTn1 : 10.03 2822.73 11.03 0.00 0.00 45234.45 2636.33 27405.96 00:24:34.581 =================================================================================================================== 00:24:34.581 Total : 2822.73 11.03 0.00 0.00 45234.45 2636.33 27405.96 00:24:34.581 0 00:24:34.581 11:15:42 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.581 11:15:42 -- target/tls.sh@45 -- # killprocess 79627 00:24:34.581 11:15:42 -- common/autotest_common.sh@936 -- # '[' -z 79627 ']' 00:24:34.581 11:15:42 -- common/autotest_common.sh@940 -- # kill -0 79627 00:24:34.581 11:15:42 -- common/autotest_common.sh@941 -- # uname 00:24:34.581 11:15:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:34.581 11:15:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79627 00:24:34.581 11:15:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:34.581 11:15:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:34.581 killing process with pid 79627 00:24:34.581 11:15:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79627' 00:24:34.581 11:15:42 -- common/autotest_common.sh@955 -- # kill 79627 00:24:34.581 Received shutdown signal, test time was about 10.000000 seconds 00:24:34.581 00:24:34.581 Latency(us) 00:24:34.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.581 =================================================================================================================== 00:24:34.581 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.581 [2024-04-18 11:15:42.641469] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:34.581 11:15:42 -- common/autotest_common.sh@960 -- # wait 79627 00:24:35.976 11:15:43 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.mufboOTJKP 00:24:35.976 11:15:43 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mufboOTJKP 00:24:35.976 11:15:43 -- common/autotest_common.sh@638 -- # local es=0 00:24:35.976 11:15:43 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mufboOTJKP 00:24:35.976 11:15:43 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:35.976 11:15:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:35.976 11:15:43 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:35.976 11:15:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:35.976 11:15:43 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mufboOTJKP 00:24:35.976 11:15:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:35.976 11:15:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:35.976 11:15:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:35.976 11:15:43 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mufboOTJKP' 00:24:35.976 11:15:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:35.976 11:15:43 -- target/tls.sh@28 -- # bdevperf_pid=79786 00:24:35.976 
11:15:43 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:35.976 11:15:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.976 11:15:43 -- target/tls.sh@31 -- # waitforlisten 79786 /var/tmp/bdevperf.sock 00:24:35.976 11:15:43 -- common/autotest_common.sh@817 -- # '[' -z 79786 ']' 00:24:35.976 11:15:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.976 11:15:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:35.976 11:15:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.976 11:15:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:35.976 11:15:43 -- common/autotest_common.sh@10 -- # set +x 00:24:35.976 [2024-04-18 11:15:43.935859] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:35.976 [2024-04-18 11:15:43.936032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79786 ] 00:24:35.976 [2024-04-18 11:15:44.111037] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.234 [2024-04-18 11:15:44.371832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.799 11:15:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:36.799 11:15:44 -- common/autotest_common.sh@850 -- # return 0 00:24:36.799 11:15:44 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mufboOTJKP 00:24:37.057 [2024-04-18 11:15:45.067631] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:37.057 [2024-04-18 11:15:45.067718] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:37.057 [2024-04-18 11:15:45.067737] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.mufboOTJKP 00:24:37.057 2024/04/18 11:15:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.mufboOTJKP subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:24:37.057 request: 00:24:37.057 { 00:24:37.057 "method": "bdev_nvme_attach_controller", 00:24:37.057 "params": { 00:24:37.057 "name": "TLSTEST", 00:24:37.057 "trtype": "tcp", 00:24:37.057 "traddr": "10.0.0.2", 00:24:37.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:37.057 "adrfam": "ipv4", 00:24:37.057 "trsvcid": "4420", 00:24:37.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.057 "psk": "/tmp/tmp.mufboOTJKP" 00:24:37.057 } 00:24:37.057 } 00:24:37.057 Got JSON-RPC error response 00:24:37.057 GoRPCClient: error on JSON-RPC call 00:24:37.057 11:15:45 -- target/tls.sh@36 -- # killprocess 79786 00:24:37.057 11:15:45 -- common/autotest_common.sh@936 -- # '[' -z 79786 ']' 00:24:37.057 11:15:45 -- common/autotest_common.sh@940 -- # kill -0 79786 
00:24:37.057 11:15:45 -- common/autotest_common.sh@941 -- # uname 00:24:37.057 11:15:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:37.057 11:15:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79786 00:24:37.057 killing process with pid 79786 00:24:37.057 Received shutdown signal, test time was about 10.000000 seconds 00:24:37.057 00:24:37.057 Latency(us) 00:24:37.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.057 =================================================================================================================== 00:24:37.057 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:37.057 11:15:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:37.057 11:15:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:37.057 11:15:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79786' 00:24:37.057 11:15:45 -- common/autotest_common.sh@955 -- # kill 79786 00:24:37.057 11:15:45 -- common/autotest_common.sh@960 -- # wait 79786 00:24:38.432 11:15:46 -- target/tls.sh@37 -- # return 1 00:24:38.432 11:15:46 -- common/autotest_common.sh@641 -- # es=1 00:24:38.432 11:15:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:38.432 11:15:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:38.432 11:15:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:38.432 11:15:46 -- target/tls.sh@174 -- # killprocess 79524 00:24:38.432 11:15:46 -- common/autotest_common.sh@936 -- # '[' -z 79524 ']' 00:24:38.432 11:15:46 -- common/autotest_common.sh@940 -- # kill -0 79524 00:24:38.432 11:15:46 -- common/autotest_common.sh@941 -- # uname 00:24:38.432 11:15:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:38.432 11:15:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79524 00:24:38.432 killing process with pid 79524 00:24:38.432 11:15:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:38.432 11:15:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:38.432 11:15:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79524' 00:24:38.432 11:15:46 -- common/autotest_common.sh@955 -- # kill 79524 00:24:38.432 [2024-04-18 11:15:46.269206] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:38.432 11:15:46 -- common/autotest_common.sh@960 -- # wait 79524 00:24:39.366 11:15:47 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:39.366 11:15:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:39.366 11:15:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:39.366 11:15:47 -- common/autotest_common.sh@10 -- # set +x 00:24:39.366 11:15:47 -- nvmf/common.sh@470 -- # nvmfpid=79861 00:24:39.366 11:15:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:39.366 11:15:47 -- nvmf/common.sh@471 -- # waitforlisten 79861 00:24:39.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
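The failing attach in the run above is driven through the autotest NOT helper (target/tls.sh@171), whose xtrace is visible in the log: the wrapped run_bdevperf exits non-zero, es=1 is recorded, and the (( es > 128 )) / (( !es == 0 )) checks turn that expected failure into a passing step. A simplified sketch of the idea — not the exact common/autotest_common.sh implementation, which also validates its argument and manages xtrace:

# NOT-style wrapper: the step passes only if the wrapped command fails
NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && return "$es"   # terminated by a signal: treat as a real failure
  (( es != 0 ))                    # success only when the command returned non-zero
}

# Usage as in this test: the attach with a world-readable PSK must be rejected
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mufboOTJKP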
00:24:39.366 11:15:47 -- common/autotest_common.sh@817 -- # '[' -z 79861 ']' 00:24:39.366 11:15:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.366 11:15:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:39.366 11:15:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.366 11:15:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:39.366 11:15:47 -- common/autotest_common.sh@10 -- # set +x 00:24:39.623 [2024-04-18 11:15:47.665291] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:39.623 [2024-04-18 11:15:47.665462] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.882 [2024-04-18 11:15:47.866290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.140 [2024-04-18 11:15:48.128640] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.140 [2024-04-18 11:15:48.128699] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.140 [2024-04-18 11:15:48.128719] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.140 [2024-04-18 11:15:48.128744] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.140 [2024-04-18 11:15:48.128760] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.140 [2024-04-18 11:15:48.128802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.398 11:15:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:40.398 11:15:48 -- common/autotest_common.sh@850 -- # return 0 00:24:40.398 11:15:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:40.398 11:15:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:40.398 11:15:48 -- common/autotest_common.sh@10 -- # set +x 00:24:40.398 11:15:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.398 11:15:48 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.mufboOTJKP 00:24:40.398 11:15:48 -- common/autotest_common.sh@638 -- # local es=0 00:24:40.398 11:15:48 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.mufboOTJKP 00:24:40.398 11:15:48 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:24:40.398 11:15:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.398 11:15:48 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:24:40.398 11:15:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.398 11:15:48 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.mufboOTJKP 00:24:40.398 11:15:48 -- target/tls.sh@49 -- # local key=/tmp/tmp.mufboOTJKP 00:24:40.398 11:15:48 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:40.656 [2024-04-18 11:15:48.830170] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.656 11:15:48 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:40.913 11:15:49 -- target/tls.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:41.479 [2024-04-18 11:15:49.406342] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.479 [2024-04-18 11:15:49.406622] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.479 11:15:49 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:41.479 malloc0 00:24:41.479 11:15:49 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:42.044 11:15:49 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mufboOTJKP 00:24:42.044 [2024-04-18 11:15:50.240016] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:42.044 [2024-04-18 11:15:50.240076] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:42.044 [2024-04-18 11:15:50.240123] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:24:42.044 2024/04/18 11:15:50 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.mufboOTJKP], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:24:42.044 request: 00:24:42.044 { 00:24:42.044 "method": "nvmf_subsystem_add_host", 00:24:42.044 "params": { 00:24:42.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.044 "host": "nqn.2016-06.io.spdk:host1", 00:24:42.044 "psk": "/tmp/tmp.mufboOTJKP" 00:24:42.044 } 00:24:42.044 } 00:24:42.044 Got JSON-RPC error response 00:24:42.044 GoRPCClient: error on JSON-RPC call 00:24:42.044 11:15:50 -- common/autotest_common.sh@641 -- # es=1 00:24:42.044 11:15:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:42.044 11:15:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:42.044 11:15:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:42.044 11:15:50 -- target/tls.sh@180 -- # killprocess 79861 00:24:42.044 11:15:50 -- common/autotest_common.sh@936 -- # '[' -z 79861 ']' 00:24:42.044 11:15:50 -- common/autotest_common.sh@940 -- # kill -0 79861 00:24:42.044 11:15:50 -- common/autotest_common.sh@941 -- # uname 00:24:42.302 11:15:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:42.302 11:15:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79861 00:24:42.302 killing process with pid 79861 00:24:42.302 11:15:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:42.302 11:15:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:42.302 11:15:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79861' 00:24:42.302 11:15:50 -- common/autotest_common.sh@955 -- # kill 79861 00:24:42.302 11:15:50 -- common/autotest_common.sh@960 -- # wait 79861 00:24:43.694 11:15:51 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.mufboOTJKP 00:24:43.694 11:15:51 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:43.694 11:15:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:43.695 11:15:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:43.695 11:15:51 -- common/autotest_common.sh@10 -- # set +x 00:24:43.695 11:15:51 -- nvmf/common.sh@470 -- # nvmfpid=79984 00:24:43.695 11:15:51 -- 
nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:43.695 11:15:51 -- nvmf/common.sh@471 -- # waitforlisten 79984 00:24:43.695 11:15:51 -- common/autotest_common.sh@817 -- # '[' -z 79984 ']' 00:24:43.695 11:15:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.695 11:15:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:43.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.695 11:15:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.695 11:15:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:43.695 11:15:51 -- common/autotest_common.sh@10 -- # set +x 00:24:43.695 [2024-04-18 11:15:51.679899] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:43.695 [2024-04-18 11:15:51.680079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.695 [2024-04-18 11:15:51.856017] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.952 [2024-04-18 11:15:52.098593] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.952 [2024-04-18 11:15:52.098678] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.952 [2024-04-18 11:15:52.098698] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.952 [2024-04-18 11:15:52.098723] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.952 [2024-04-18 11:15:52.098738] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
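The @170-@181 steps check that both sides refuse a PSK file with loose permissions: after chmod 0666, bdev_nvme_attach_controller fails with "Incorrect permissions for PSK file" (Code=-1, Operation not permitted) and nvmf_subsystem_add_host fails on the target with "Could not retrieve PSK from file" (Code=-32603, Internal error); once the key is restored to 0600 the same calls go through in the next run, leaving only the PSK-path deprecation warning. Stripped of the test harness, the negative checks amount to the following (a sketch reusing this run's key path and NQNs):

chmod 0666 /tmp/tmp.mufboOTJKP

# Initiator side: expected to be rejected while the key is world-readable
NOT scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
  --psk /tmp/tmp.mufboOTJKP

# Target side: expected to be rejected for the same reason
NOT scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
  --psk /tmp/tmp.mufboOTJKP

chmod 0600 /tmp/tmp.mufboOTJKP   # restored before the remaining cases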
00:24:43.952 [2024-04-18 11:15:52.098782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.517 11:15:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:44.517 11:15:52 -- common/autotest_common.sh@850 -- # return 0 00:24:44.517 11:15:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:44.517 11:15:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:44.517 11:15:52 -- common/autotest_common.sh@10 -- # set +x 00:24:44.517 11:15:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.517 11:15:52 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.mufboOTJKP 00:24:44.517 11:15:52 -- target/tls.sh@49 -- # local key=/tmp/tmp.mufboOTJKP 00:24:44.517 11:15:52 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:44.776 [2024-04-18 11:15:52.887149] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.776 11:15:52 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:45.034 11:15:53 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:45.291 [2024-04-18 11:15:53.383297] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:45.291 [2024-04-18 11:15:53.383586] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.291 11:15:53 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:45.549 malloc0 00:24:45.549 11:15:53 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:45.806 11:15:54 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mufboOTJKP 00:24:46.064 [2024-04-18 11:15:54.264006] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:46.322 11:15:54 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:46.322 11:15:54 -- target/tls.sh@188 -- # bdevperf_pid=80092 00:24:46.322 11:15:54 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:46.322 11:15:54 -- target/tls.sh@191 -- # waitforlisten 80092 /var/tmp/bdevperf.sock 00:24:46.322 11:15:54 -- common/autotest_common.sh@817 -- # '[' -z 80092 ']' 00:24:46.322 11:15:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.322 11:15:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:46.322 11:15:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:46.322 11:15:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:46.322 11:15:54 -- common/autotest_common.sh@10 -- # set +x 00:24:46.322 [2024-04-18 11:15:54.368025] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:46.322 [2024-04-18 11:15:54.368204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80092 ] 00:24:46.322 [2024-04-18 11:15:54.535526] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.886 [2024-04-18 11:15:54.810776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.143 11:15:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:47.144 11:15:55 -- common/autotest_common.sh@850 -- # return 0 00:24:47.144 11:15:55 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mufboOTJKP 00:24:47.401 [2024-04-18 11:15:55.485511] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.401 [2024-04-18 11:15:55.486293] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:47.401 TLSTESTn1 00:24:47.401 11:15:55 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:47.966 11:15:55 -- target/tls.sh@196 -- # tgtconf='{ 00:24:47.966 "subsystems": [ 00:24:47.966 { 00:24:47.966 "subsystem": "keyring", 00:24:47.966 "config": [] 00:24:47.966 }, 00:24:47.966 { 00:24:47.966 "subsystem": "iobuf", 00:24:47.966 "config": [ 00:24:47.966 { 00:24:47.966 "method": "iobuf_set_options", 00:24:47.966 "params": { 00:24:47.966 "large_bufsize": 135168, 00:24:47.966 "large_pool_count": 1024, 00:24:47.966 "small_bufsize": 8192, 00:24:47.966 "small_pool_count": 8192 00:24:47.966 } 00:24:47.966 } 00:24:47.966 ] 00:24:47.966 }, 00:24:47.966 { 00:24:47.966 "subsystem": "sock", 00:24:47.966 "config": [ 00:24:47.966 { 00:24:47.966 "method": "sock_impl_set_options", 00:24:47.966 "params": { 00:24:47.966 "enable_ktls": false, 00:24:47.966 "enable_placement_id": 0, 00:24:47.966 "enable_quickack": false, 00:24:47.966 "enable_recv_pipe": true, 00:24:47.966 "enable_zerocopy_send_client": false, 00:24:47.966 "enable_zerocopy_send_server": true, 00:24:47.966 "impl_name": "posix", 00:24:47.966 "recv_buf_size": 2097152, 00:24:47.966 "send_buf_size": 2097152, 00:24:47.966 "tls_version": 0, 00:24:47.966 "zerocopy_threshold": 0 00:24:47.966 } 00:24:47.966 }, 00:24:47.966 { 00:24:47.966 "method": "sock_impl_set_options", 00:24:47.966 "params": { 00:24:47.966 "enable_ktls": false, 00:24:47.966 "enable_placement_id": 0, 00:24:47.966 "enable_quickack": false, 00:24:47.966 "enable_recv_pipe": true, 00:24:47.966 "enable_zerocopy_send_client": false, 00:24:47.966 "enable_zerocopy_send_server": true, 00:24:47.966 "impl_name": "ssl", 00:24:47.966 "recv_buf_size": 4096, 00:24:47.966 "send_buf_size": 4096, 00:24:47.966 "tls_version": 0, 00:24:47.966 "zerocopy_threshold": 0 00:24:47.966 } 00:24:47.966 } 00:24:47.966 ] 00:24:47.966 }, 00:24:47.966 { 00:24:47.966 "subsystem": "vmd", 00:24:47.966 "config": [] 00:24:47.966 }, 00:24:47.966 { 00:24:47.966 "subsystem": "accel", 00:24:47.966 "config": [ 00:24:47.966 { 00:24:47.966 "method": "accel_set_options", 00:24:47.966 "params": { 00:24:47.966 "buf_count": 2048, 00:24:47.966 "large_cache_size": 16, 00:24:47.966 "sequence_count": 2048, 00:24:47.966 "small_cache_size": 128, 00:24:47.966 "task_count": 
2048 00:24:47.966 } 00:24:47.966 } 00:24:47.966 ] 00:24:47.966 }, 00:24:47.966 { 00:24:47.967 "subsystem": "bdev", 00:24:47.967 "config": [ 00:24:47.967 { 00:24:47.967 "method": "bdev_set_options", 00:24:47.967 "params": { 00:24:47.967 "bdev_auto_examine": true, 00:24:47.967 "bdev_io_cache_size": 256, 00:24:47.967 "bdev_io_pool_size": 65535, 00:24:47.967 "iobuf_large_cache_size": 16, 00:24:47.967 "iobuf_small_cache_size": 128 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "bdev_raid_set_options", 00:24:47.967 "params": { 00:24:47.967 "process_window_size_kb": 1024 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "bdev_iscsi_set_options", 00:24:47.967 "params": { 00:24:47.967 "timeout_sec": 30 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "bdev_nvme_set_options", 00:24:47.967 "params": { 00:24:47.967 "action_on_timeout": "none", 00:24:47.967 "allow_accel_sequence": false, 00:24:47.967 "arbitration_burst": 0, 00:24:47.967 "bdev_retry_count": 3, 00:24:47.967 "ctrlr_loss_timeout_sec": 0, 00:24:47.967 "delay_cmd_submit": true, 00:24:47.967 "dhchap_dhgroups": [ 00:24:47.967 "null", 00:24:47.967 "ffdhe2048", 00:24:47.967 "ffdhe3072", 00:24:47.967 "ffdhe4096", 00:24:47.967 "ffdhe6144", 00:24:47.967 "ffdhe8192" 00:24:47.967 ], 00:24:47.967 "dhchap_digests": [ 00:24:47.967 "sha256", 00:24:47.967 "sha384", 00:24:47.967 "sha512" 00:24:47.967 ], 00:24:47.967 "disable_auto_failback": false, 00:24:47.967 "fast_io_fail_timeout_sec": 0, 00:24:47.967 "generate_uuids": false, 00:24:47.967 "high_priority_weight": 0, 00:24:47.967 "io_path_stat": false, 00:24:47.967 "io_queue_requests": 0, 00:24:47.967 "keep_alive_timeout_ms": 10000, 00:24:47.967 "low_priority_weight": 0, 00:24:47.967 "medium_priority_weight": 0, 00:24:47.967 "nvme_adminq_poll_period_us": 10000, 00:24:47.967 "nvme_error_stat": false, 00:24:47.967 "nvme_ioq_poll_period_us": 0, 00:24:47.967 "rdma_cm_event_timeout_ms": 0, 00:24:47.967 "rdma_max_cq_size": 0, 00:24:47.967 "rdma_srq_size": 0, 00:24:47.967 "reconnect_delay_sec": 0, 00:24:47.967 "timeout_admin_us": 0, 00:24:47.967 "timeout_us": 0, 00:24:47.967 "transport_ack_timeout": 0, 00:24:47.967 "transport_retry_count": 4, 00:24:47.967 "transport_tos": 0 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "bdev_nvme_set_hotplug", 00:24:47.967 "params": { 00:24:47.967 "enable": false, 00:24:47.967 "period_us": 100000 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "bdev_malloc_create", 00:24:47.967 "params": { 00:24:47.967 "block_size": 4096, 00:24:47.967 "name": "malloc0", 00:24:47.967 "num_blocks": 8192, 00:24:47.967 "optimal_io_boundary": 0, 00:24:47.967 "physical_block_size": 4096, 00:24:47.967 "uuid": "e0e94da6-5911-4960-a1e1-d403e4938ddf" 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "bdev_wait_for_examine" 00:24:47.967 } 00:24:47.967 ] 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "subsystem": "nbd", 00:24:47.967 "config": [] 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "subsystem": "scheduler", 00:24:47.967 "config": [ 00:24:47.967 { 00:24:47.967 "method": "framework_set_scheduler", 00:24:47.967 "params": { 00:24:47.967 "name": "static" 00:24:47.967 } 00:24:47.967 } 00:24:47.967 ] 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "subsystem": "nvmf", 00:24:47.967 "config": [ 00:24:47.967 { 00:24:47.967 "method": "nvmf_set_config", 00:24:47.967 "params": { 00:24:47.967 "admin_cmd_passthru": { 00:24:47.967 "identify_ctrlr": false 00:24:47.967 }, 00:24:47.967 "discovery_filter": 
"match_any" 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "nvmf_set_max_subsystems", 00:24:47.967 "params": { 00:24:47.967 "max_subsystems": 1024 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "nvmf_set_crdt", 00:24:47.967 "params": { 00:24:47.967 "crdt1": 0, 00:24:47.967 "crdt2": 0, 00:24:47.967 "crdt3": 0 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "nvmf_create_transport", 00:24:47.967 "params": { 00:24:47.967 "abort_timeout_sec": 1, 00:24:47.967 "ack_timeout": 0, 00:24:47.967 "buf_cache_size": 4294967295, 00:24:47.967 "c2h_success": false, 00:24:47.967 "dif_insert_or_strip": false, 00:24:47.967 "in_capsule_data_size": 4096, 00:24:47.967 "io_unit_size": 131072, 00:24:47.967 "max_aq_depth": 128, 00:24:47.967 "max_io_qpairs_per_ctrlr": 127, 00:24:47.967 "max_io_size": 131072, 00:24:47.967 "max_queue_depth": 128, 00:24:47.967 "num_shared_buffers": 511, 00:24:47.967 "sock_priority": 0, 00:24:47.967 "trtype": "TCP", 00:24:47.967 "zcopy": false 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "nvmf_create_subsystem", 00:24:47.967 "params": { 00:24:47.967 "allow_any_host": false, 00:24:47.967 "ana_reporting": false, 00:24:47.967 "max_cntlid": 65519, 00:24:47.967 "max_namespaces": 10, 00:24:47.967 "min_cntlid": 1, 00:24:47.967 "model_number": "SPDK bdev Controller", 00:24:47.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.967 "serial_number": "SPDK00000000000001" 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "nvmf_subsystem_add_host", 00:24:47.967 "params": { 00:24:47.967 "host": "nqn.2016-06.io.spdk:host1", 00:24:47.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.967 "psk": "/tmp/tmp.mufboOTJKP" 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "nvmf_subsystem_add_ns", 00:24:47.967 "params": { 00:24:47.967 "namespace": { 00:24:47.967 "bdev_name": "malloc0", 00:24:47.967 "nguid": "E0E94DA659114960A1E1D403E4938DDF", 00:24:47.967 "no_auto_visible": false, 00:24:47.967 "nsid": 1, 00:24:47.967 "uuid": "e0e94da6-5911-4960-a1e1-d403e4938ddf" 00:24:47.967 }, 00:24:47.967 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:47.967 } 00:24:47.967 }, 00:24:47.967 { 00:24:47.967 "method": "nvmf_subsystem_add_listener", 00:24:47.967 "params": { 00:24:47.967 "listen_address": { 00:24:47.967 "adrfam": "IPv4", 00:24:47.967 "traddr": "10.0.0.2", 00:24:47.967 "trsvcid": "4420", 00:24:47.967 "trtype": "TCP" 00:24:47.967 }, 00:24:47.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.967 "secure_channel": true 00:24:47.967 } 00:24:47.967 } 00:24:47.967 ] 00:24:47.967 } 00:24:47.967 ] 00:24:47.967 }' 00:24:47.967 11:15:55 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:48.225 11:15:56 -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:48.225 "subsystems": [ 00:24:48.225 { 00:24:48.225 "subsystem": "keyring", 00:24:48.225 "config": [] 00:24:48.225 }, 00:24:48.225 { 00:24:48.225 "subsystem": "iobuf", 00:24:48.225 "config": [ 00:24:48.225 { 00:24:48.225 "method": "iobuf_set_options", 00:24:48.225 "params": { 00:24:48.225 "large_bufsize": 135168, 00:24:48.225 "large_pool_count": 1024, 00:24:48.225 "small_bufsize": 8192, 00:24:48.225 "small_pool_count": 8192 00:24:48.225 } 00:24:48.225 } 00:24:48.225 ] 00:24:48.225 }, 00:24:48.225 { 00:24:48.225 "subsystem": "sock", 00:24:48.225 "config": [ 00:24:48.225 { 00:24:48.225 "method": "sock_impl_set_options", 00:24:48.225 "params": { 00:24:48.225 "enable_ktls": false, 00:24:48.225 
"enable_placement_id": 0, 00:24:48.225 "enable_quickack": false, 00:24:48.225 "enable_recv_pipe": true, 00:24:48.225 "enable_zerocopy_send_client": false, 00:24:48.225 "enable_zerocopy_send_server": true, 00:24:48.225 "impl_name": "posix", 00:24:48.225 "recv_buf_size": 2097152, 00:24:48.225 "send_buf_size": 2097152, 00:24:48.225 "tls_version": 0, 00:24:48.225 "zerocopy_threshold": 0 00:24:48.225 } 00:24:48.225 }, 00:24:48.225 { 00:24:48.225 "method": "sock_impl_set_options", 00:24:48.225 "params": { 00:24:48.225 "enable_ktls": false, 00:24:48.225 "enable_placement_id": 0, 00:24:48.225 "enable_quickack": false, 00:24:48.225 "enable_recv_pipe": true, 00:24:48.225 "enable_zerocopy_send_client": false, 00:24:48.225 "enable_zerocopy_send_server": true, 00:24:48.225 "impl_name": "ssl", 00:24:48.225 "recv_buf_size": 4096, 00:24:48.225 "send_buf_size": 4096, 00:24:48.225 "tls_version": 0, 00:24:48.225 "zerocopy_threshold": 0 00:24:48.225 } 00:24:48.225 } 00:24:48.225 ] 00:24:48.225 }, 00:24:48.225 { 00:24:48.225 "subsystem": "vmd", 00:24:48.225 "config": [] 00:24:48.225 }, 00:24:48.225 { 00:24:48.225 "subsystem": "accel", 00:24:48.225 "config": [ 00:24:48.225 { 00:24:48.225 "method": "accel_set_options", 00:24:48.225 "params": { 00:24:48.225 "buf_count": 2048, 00:24:48.225 "large_cache_size": 16, 00:24:48.225 "sequence_count": 2048, 00:24:48.225 "small_cache_size": 128, 00:24:48.225 "task_count": 2048 00:24:48.225 } 00:24:48.225 } 00:24:48.225 ] 00:24:48.225 }, 00:24:48.225 { 00:24:48.225 "subsystem": "bdev", 00:24:48.225 "config": [ 00:24:48.225 { 00:24:48.225 "method": "bdev_set_options", 00:24:48.225 "params": { 00:24:48.225 "bdev_auto_examine": true, 00:24:48.225 "bdev_io_cache_size": 256, 00:24:48.225 "bdev_io_pool_size": 65535, 00:24:48.225 "iobuf_large_cache_size": 16, 00:24:48.225 "iobuf_small_cache_size": 128 00:24:48.225 } 00:24:48.225 }, 00:24:48.225 { 00:24:48.225 "method": "bdev_raid_set_options", 00:24:48.225 "params": { 00:24:48.225 "process_window_size_kb": 1024 00:24:48.225 } 00:24:48.225 }, 00:24:48.225 { 00:24:48.225 "method": "bdev_iscsi_set_options", 00:24:48.225 "params": { 00:24:48.225 "timeout_sec": 30 00:24:48.225 } 00:24:48.225 }, 00:24:48.225 { 00:24:48.225 "method": "bdev_nvme_set_options", 00:24:48.225 "params": { 00:24:48.225 "action_on_timeout": "none", 00:24:48.225 "allow_accel_sequence": false, 00:24:48.225 "arbitration_burst": 0, 00:24:48.225 "bdev_retry_count": 3, 00:24:48.225 "ctrlr_loss_timeout_sec": 0, 00:24:48.225 "delay_cmd_submit": true, 00:24:48.225 "dhchap_dhgroups": [ 00:24:48.225 "null", 00:24:48.225 "ffdhe2048", 00:24:48.225 "ffdhe3072", 00:24:48.225 "ffdhe4096", 00:24:48.225 "ffdhe6144", 00:24:48.225 "ffdhe8192" 00:24:48.225 ], 00:24:48.225 "dhchap_digests": [ 00:24:48.225 "sha256", 00:24:48.225 "sha384", 00:24:48.225 "sha512" 00:24:48.225 ], 00:24:48.226 "disable_auto_failback": false, 00:24:48.226 "fast_io_fail_timeout_sec": 0, 00:24:48.226 "generate_uuids": false, 00:24:48.226 "high_priority_weight": 0, 00:24:48.226 "io_path_stat": false, 00:24:48.226 "io_queue_requests": 512, 00:24:48.226 "keep_alive_timeout_ms": 10000, 00:24:48.226 "low_priority_weight": 0, 00:24:48.226 "medium_priority_weight": 0, 00:24:48.226 "nvme_adminq_poll_period_us": 10000, 00:24:48.226 "nvme_error_stat": false, 00:24:48.226 "nvme_ioq_poll_period_us": 0, 00:24:48.226 "rdma_cm_event_timeout_ms": 0, 00:24:48.226 "rdma_max_cq_size": 0, 00:24:48.226 "rdma_srq_size": 0, 00:24:48.226 "reconnect_delay_sec": 0, 00:24:48.226 "timeout_admin_us": 0, 00:24:48.226 "timeout_us": 0, 
00:24:48.226 "transport_ack_timeout": 0, 00:24:48.226 "transport_retry_count": 4, 00:24:48.226 "transport_tos": 0 00:24:48.226 } 00:24:48.226 }, 00:24:48.226 { 00:24:48.226 "method": "bdev_nvme_attach_controller", 00:24:48.226 "params": { 00:24:48.226 "adrfam": "IPv4", 00:24:48.226 "ctrlr_loss_timeout_sec": 0, 00:24:48.226 "ddgst": false, 00:24:48.226 "fast_io_fail_timeout_sec": 0, 00:24:48.226 "hdgst": false, 00:24:48.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.226 "name": "TLSTEST", 00:24:48.226 "prchk_guard": false, 00:24:48.226 "prchk_reftag": false, 00:24:48.226 "psk": "/tmp/tmp.mufboOTJKP", 00:24:48.226 "reconnect_delay_sec": 0, 00:24:48.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.226 "traddr": "10.0.0.2", 00:24:48.226 "trsvcid": "4420", 00:24:48.226 "trtype": "TCP" 00:24:48.226 } 00:24:48.226 }, 00:24:48.226 { 00:24:48.226 "method": "bdev_nvme_set_hotplug", 00:24:48.226 "params": { 00:24:48.226 "enable": false, 00:24:48.226 "period_us": 100000 00:24:48.226 } 00:24:48.226 }, 00:24:48.226 { 00:24:48.226 "method": "bdev_wait_for_examine" 00:24:48.226 } 00:24:48.226 ] 00:24:48.226 }, 00:24:48.226 { 00:24:48.226 "subsystem": "nbd", 00:24:48.226 "config": [] 00:24:48.226 } 00:24:48.226 ] 00:24:48.226 }' 00:24:48.226 11:15:56 -- target/tls.sh@199 -- # killprocess 80092 00:24:48.226 11:15:56 -- common/autotest_common.sh@936 -- # '[' -z 80092 ']' 00:24:48.226 11:15:56 -- common/autotest_common.sh@940 -- # kill -0 80092 00:24:48.226 11:15:56 -- common/autotest_common.sh@941 -- # uname 00:24:48.226 11:15:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:48.226 11:15:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80092 00:24:48.226 11:15:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:48.226 killing process with pid 80092 00:24:48.226 11:15:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:48.226 11:15:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80092' 00:24:48.226 11:15:56 -- common/autotest_common.sh@955 -- # kill 80092 00:24:48.226 11:15:56 -- common/autotest_common.sh@960 -- # wait 80092 00:24:48.226 Received shutdown signal, test time was about 10.000000 seconds 00:24:48.226 00:24:48.226 Latency(us) 00:24:48.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.226 =================================================================================================================== 00:24:48.226 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:48.226 [2024-04-18 11:15:56.294522] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:49.599 11:15:57 -- target/tls.sh@200 -- # killprocess 79984 00:24:49.599 11:15:57 -- common/autotest_common.sh@936 -- # '[' -z 79984 ']' 00:24:49.599 11:15:57 -- common/autotest_common.sh@940 -- # kill -0 79984 00:24:49.599 11:15:57 -- common/autotest_common.sh@941 -- # uname 00:24:49.599 11:15:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:49.599 11:15:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79984 00:24:49.599 11:15:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:49.599 11:15:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:49.599 killing process with pid 79984 00:24:49.599 11:15:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79984' 00:24:49.599 11:15:57 -- 
common/autotest_common.sh@955 -- # kill 79984 00:24:49.599 [2024-04-18 11:15:57.470111] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:49.599 11:15:57 -- common/autotest_common.sh@960 -- # wait 79984 00:24:50.560 11:15:58 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:50.560 11:15:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:50.560 11:15:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:50.560 11:15:58 -- target/tls.sh@203 -- # echo '{ 00:24:50.560 "subsystems": [ 00:24:50.560 { 00:24:50.560 "subsystem": "keyring", 00:24:50.560 "config": [] 00:24:50.560 }, 00:24:50.560 { 00:24:50.560 "subsystem": "iobuf", 00:24:50.560 "config": [ 00:24:50.560 { 00:24:50.560 "method": "iobuf_set_options", 00:24:50.560 "params": { 00:24:50.560 "large_bufsize": 135168, 00:24:50.560 "large_pool_count": 1024, 00:24:50.560 "small_bufsize": 8192, 00:24:50.560 "small_pool_count": 8192 00:24:50.560 } 00:24:50.560 } 00:24:50.560 ] 00:24:50.560 }, 00:24:50.560 { 00:24:50.560 "subsystem": "sock", 00:24:50.560 "config": [ 00:24:50.560 { 00:24:50.560 "method": "sock_impl_set_options", 00:24:50.560 "params": { 00:24:50.560 "enable_ktls": false, 00:24:50.560 "enable_placement_id": 0, 00:24:50.560 "enable_quickack": false, 00:24:50.560 "enable_recv_pipe": true, 00:24:50.560 "enable_zerocopy_send_client": false, 00:24:50.560 "enable_zerocopy_send_server": true, 00:24:50.560 "impl_name": "posix", 00:24:50.560 "recv_buf_size": 2097152, 00:24:50.560 "send_buf_size": 2097152, 00:24:50.560 "tls_version": 0, 00:24:50.560 "zerocopy_threshold": 0 00:24:50.560 } 00:24:50.560 }, 00:24:50.560 { 00:24:50.560 "method": "sock_impl_set_options", 00:24:50.560 "params": { 00:24:50.560 "enable_ktls": false, 00:24:50.560 "enable_placement_id": 0, 00:24:50.560 "enable_quickack": false, 00:24:50.560 "enable_recv_pipe": true, 00:24:50.560 "enable_zerocopy_send_client": false, 00:24:50.560 "enable_zerocopy_send_server": true, 00:24:50.560 "impl_name": "ssl", 00:24:50.560 "recv_buf_size": 4096, 00:24:50.560 "send_buf_size": 4096, 00:24:50.560 "tls_version": 0, 00:24:50.560 "zerocopy_threshold": 0 00:24:50.560 } 00:24:50.560 } 00:24:50.560 ] 00:24:50.560 }, 00:24:50.561 { 00:24:50.561 "subsystem": "vmd", 00:24:50.561 "config": [] 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "subsystem": "accel", 00:24:50.561 "config": [ 00:24:50.561 { 00:24:50.561 "method": "accel_set_options", 00:24:50.561 "params": { 00:24:50.561 "buf_count": 2048, 00:24:50.561 "large_cache_size": 16, 00:24:50.561 "sequence_count": 2048, 00:24:50.561 "small_cache_size": 128, 00:24:50.561 "task_count": 2048 00:24:50.561 } 00:24:50.561 } 00:24:50.561 ] 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "subsystem": "bdev", 00:24:50.561 "config": [ 00:24:50.561 { 00:24:50.561 "method": "bdev_set_options", 00:24:50.561 "params": { 00:24:50.561 "bdev_auto_examine": true, 00:24:50.561 "bdev_io_cache_size": 256, 00:24:50.561 "bdev_io_pool_size": 65535, 00:24:50.561 "iobuf_large_cache_size": 16, 00:24:50.561 "iobuf_small_cache_size": 128 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "bdev_raid_set_options", 00:24:50.561 "params": { 00:24:50.561 "process_window_size_kb": 1024 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "bdev_iscsi_set_options", 00:24:50.561 "params": { 00:24:50.561 "timeout_sec": 30 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "bdev_nvme_set_options", 00:24:50.561 
"params": { 00:24:50.561 "action_on_timeout": "none", 00:24:50.561 "allow_accel_sequence": false, 00:24:50.561 "arbitration_burst": 0, 00:24:50.561 "bdev_retry_count": 3, 00:24:50.561 "ctrlr_loss_timeout_sec": 0, 00:24:50.561 "delay_cmd_submit": true, 00:24:50.561 "dhchap_dhgroups": [ 00:24:50.561 "null", 00:24:50.561 "ffdhe2048", 00:24:50.561 "ffdhe3072", 00:24:50.561 "ffdhe4096", 00:24:50.561 "ffdhe6144", 00:24:50.561 "ffdhe8192" 00:24:50.561 ], 00:24:50.561 "dhchap_digests": [ 00:24:50.561 "sha256", 00:24:50.561 "sha384", 00:24:50.561 "sha512" 00:24:50.561 ], 00:24:50.561 "disable_auto_failback": false, 00:24:50.561 "fast_io_fail_timeout_sec": 0, 00:24:50.561 "generate_uuids": false, 00:24:50.561 "high_priority_weight": 0, 00:24:50.561 "io_path_stat": false, 00:24:50.561 "io_queue_requests": 0, 00:24:50.561 "keep_alive_timeout_ms": 10000, 00:24:50.561 "low_priority_weight": 0, 00:24:50.561 "medium_priority_weight": 0, 00:24:50.561 "nvme_adminq_poll_period_us": 10000, 00:24:50.561 "nvme_error_stat": false, 00:24:50.561 "nvme_ioq_poll_period_us": 0, 00:24:50.561 "rdma_cm_event_timeout_ms": 0, 00:24:50.561 "rdma_max_cq_size": 0, 00:24:50.561 "rdma_srq_size": 0, 00:24:50.561 "reconnect_delay_sec": 0, 00:24:50.561 "timeout_admin_us": 0, 00:24:50.561 "timeout_us": 0, 00:24:50.561 "transport_ack_timeout": 0, 00:24:50.561 "transport_retry_count": 4, 00:24:50.561 "transport_tos": 0 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "bdev_nvme_set_hotplug", 00:24:50.561 "params": { 00:24:50.561 "enable": false, 00:24:50.561 "period_us": 100000 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "bdev_malloc_create", 00:24:50.561 "params": { 00:24:50.561 "block_size": 4096, 00:24:50.561 "name": "malloc0", 00:24:50.561 "num_blocks": 8192, 00:24:50.561 "optimal_io_boundary": 0, 00:24:50.561 "physical_block_size": 4096, 00:24:50.561 "uuid": "e0e94da6-5911-4960-a1e1-d403e4938ddf" 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "bdev_wait_for_examine" 00:24:50.561 } 00:24:50.561 ] 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "subsystem": "nbd", 00:24:50.561 "config": [] 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "subsystem": "scheduler", 00:24:50.561 "config": [ 00:24:50.561 { 00:24:50.561 "method": "framework_set_scheduler", 00:24:50.561 "params": { 00:24:50.561 "name": "static" 00:24:50.561 } 00:24:50.561 } 00:24:50.561 ] 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "subsystem": "nvmf", 00:24:50.561 "config": [ 00:24:50.561 { 00:24:50.561 "method": "nvmf_set_config", 00:24:50.561 "params": { 00:24:50.561 "admin_cmd_passthru": { 00:24:50.561 "identify_ctrlr": false 00:24:50.561 }, 00:24:50.561 "discovery_filter": "match_any" 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "nvmf_set_max_subsystems", 00:24:50.561 "params": { 00:24:50.561 "max_subsystems": 1024 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "nvmf_set_crdt", 00:24:50.561 "params": { 00:24:50.561 "crdt1": 0, 00:24:50.561 "crdt2": 0, 00:24:50.561 "crdt3": 0 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "nvmf_create_transport", 00:24:50.561 "params": { 00:24:50.561 "abort_timeout_sec": 1, 00:24:50.561 "ack_timeout": 0, 00:24:50.561 "buf_cache_size": 4294967295, 00:24:50.561 "c2h_success": false, 00:24:50.561 "dif_insert_or_strip": false, 00:24:50.561 "in_capsule_data_size": 4096, 00:24:50.561 "io_unit_size": 131072, 00:24:50.561 "max_aq_depth": 128, 00:24:50.561 "max_io_qpairs_per_ctrlr": 127, 00:24:50.561 "max_io_size": 
131072, 00:24:50.561 "max_queue_depth": 128, 00:24:50.561 "num_shared_buffers": 511, 00:24:50.561 "sock_priority": 0, 00:24:50.561 "trtype": "TCP", 00:24:50.561 "zcopy": false 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "nvmf_create_subsystem", 00:24:50.561 "params": { 00:24:50.561 "allow_any_host": false, 00:24:50.561 "ana_reporting": false, 00:24:50.561 "max_cntlid": 65519, 00:24:50.561 "max_namespaces": 10, 00:24:50.561 "min_cntlid": 1, 00:24:50.561 "model_number": "SPDK bdev Controller", 00:24:50.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.561 "serial_number": "SPDK00000000000001" 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "nvmf_subsystem_add_host", 00:24:50.561 "params": { 00:24:50.561 "host": "nqn.2016-06.io.spdk:host1", 00:24:50.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.561 "psk": "/tmp/tmp.mufboOTJKP" 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "nvmf_subsystem_add_ns", 00:24:50.561 "params": { 00:24:50.561 "namespace": { 00:24:50.561 "bdev_name": "malloc0", 00:24:50.561 "nguid": "E0E94DA659114960A1E1D403E4938DDF", 00:24:50.561 "no_auto_visible": false, 00:24:50.561 "nsid": 1, 00:24:50.561 "uuid": "e0e94da6-5911-4960-a1e1-d403e4938ddf" 00:24:50.561 }, 00:24:50.561 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:50.561 } 00:24:50.561 }, 00:24:50.561 { 00:24:50.561 "method": "nvmf_subsystem_add_listener", 00:24:50.561 "params": { 00:24:50.561 "listen_address": { 00:24:50.561 "adrfam": "IPv4", 00:24:50.561 "traddr": "10.0.0.2", 00:24:50.561 "trsvcid": "4420", 00:24:50.561 "trtype": "TCP" 00:24:50.561 }, 00:24:50.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.561 "secure_channel": true 00:24:50.561 } 00:24:50.561 } 00:24:50.561 ] 00:24:50.561 } 00:24:50.561 ] 00:24:50.561 }' 00:24:50.561 11:15:58 -- common/autotest_common.sh@10 -- # set +x 00:24:50.561 11:15:58 -- nvmf/common.sh@470 -- # nvmfpid=80185 00:24:50.561 11:15:58 -- nvmf/common.sh@471 -- # waitforlisten 80185 00:24:50.561 11:15:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:50.561 11:15:58 -- common/autotest_common.sh@817 -- # '[' -z 80185 ']' 00:24:50.561 11:15:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.561 11:15:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:50.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.561 11:15:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.561 11:15:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:50.561 11:15:58 -- common/autotest_common.sh@10 -- # set +x 00:24:50.820 [2024-04-18 11:15:58.858316] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:50.820 [2024-04-18 11:15:58.858474] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.820 [2024-04-18 11:15:59.035636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.078 [2024-04-18 11:15:59.265719] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:51.078 [2024-04-18 11:15:59.265795] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.078 [2024-04-18 11:15:59.265813] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.078 [2024-04-18 11:15:59.265838] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.078 [2024-04-18 11:15:59.265852] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.078 [2024-04-18 11:15:59.266014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.648 [2024-04-18 11:15:59.735462] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.648 [2024-04-18 11:15:59.751423] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:51.648 [2024-04-18 11:15:59.767407] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:51.648 [2024-04-18 11:15:59.767652] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.648 11:15:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:51.648 11:15:59 -- common/autotest_common.sh@850 -- # return 0 00:24:51.648 11:15:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:51.648 11:15:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:51.648 11:15:59 -- common/autotest_common.sh@10 -- # set +x 00:24:51.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:51.648 11:15:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.648 11:15:59 -- target/tls.sh@207 -- # bdevperf_pid=80228 00:24:51.648 11:15:59 -- target/tls.sh@208 -- # waitforlisten 80228 /var/tmp/bdevperf.sock 00:24:51.648 11:15:59 -- common/autotest_common.sh@817 -- # '[' -z 80228 ']' 00:24:51.648 11:15:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.648 11:15:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:51.649 11:15:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
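From target/tls.sh@203 onward the same workload is repeated, but instead of replaying the setup RPCs the harness restarts nvmf_tgt and bdevperf straight from the JSON captured earlier with save_config, piped in as -c /dev/fd/62 above and -c /dev/fd/63 just below. The point is that the TLS pieces — the secure_channel listener, the host entry with its psk path, and the TLSTEST attach parameters — survive a save/restore round trip. Outside the harness the same round trip looks roughly like this (tgt.json and bdevperf.json are placeholder file names):

# Capture the running target's configuration, including the nvmf TLS listener and PSK host entry
scripts/rpc.py save_config > tgt.json
# Start a fresh target from the snapshot instead of re-issuing individual RPCs
build/bin/nvmf_tgt -m 0x2 -c tgt.json

# Same idea for the initiator: capture the bdevperf-side config and feed it back at startup
scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf.json
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperf.json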
00:24:51.649 11:15:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:51.649 11:15:59 -- common/autotest_common.sh@10 -- # set +x 00:24:51.649 11:15:59 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:51.649 11:15:59 -- target/tls.sh@204 -- # echo '{ 00:24:51.649 "subsystems": [ 00:24:51.649 { 00:24:51.649 "subsystem": "keyring", 00:24:51.649 "config": [] 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "subsystem": "iobuf", 00:24:51.649 "config": [ 00:24:51.649 { 00:24:51.649 "method": "iobuf_set_options", 00:24:51.649 "params": { 00:24:51.649 "large_bufsize": 135168, 00:24:51.649 "large_pool_count": 1024, 00:24:51.649 "small_bufsize": 8192, 00:24:51.649 "small_pool_count": 8192 00:24:51.649 } 00:24:51.649 } 00:24:51.649 ] 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "subsystem": "sock", 00:24:51.649 "config": [ 00:24:51.649 { 00:24:51.649 "method": "sock_impl_set_options", 00:24:51.649 "params": { 00:24:51.649 "enable_ktls": false, 00:24:51.649 "enable_placement_id": 0, 00:24:51.649 "enable_quickack": false, 00:24:51.649 "enable_recv_pipe": true, 00:24:51.649 "enable_zerocopy_send_client": false, 00:24:51.649 "enable_zerocopy_send_server": true, 00:24:51.649 "impl_name": "posix", 00:24:51.649 "recv_buf_size": 2097152, 00:24:51.649 "send_buf_size": 2097152, 00:24:51.649 "tls_version": 0, 00:24:51.649 "zerocopy_threshold": 0 00:24:51.649 } 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "method": "sock_impl_set_options", 00:24:51.649 "params": { 00:24:51.649 "enable_ktls": false, 00:24:51.649 "enable_placement_id": 0, 00:24:51.649 "enable_quickack": false, 00:24:51.649 "enable_recv_pipe": true, 00:24:51.649 "enable_zerocopy_send_client": false, 00:24:51.649 "enable_zerocopy_send_server": true, 00:24:51.649 "impl_name": "ssl", 00:24:51.649 "recv_buf_size": 4096, 00:24:51.649 "send_buf_size": 4096, 00:24:51.649 "tls_version": 0, 00:24:51.649 "zerocopy_threshold": 0 00:24:51.649 } 00:24:51.649 } 00:24:51.649 ] 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "subsystem": "vmd", 00:24:51.649 "config": [] 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "subsystem": "accel", 00:24:51.649 "config": [ 00:24:51.649 { 00:24:51.649 "method": "accel_set_options", 00:24:51.649 "params": { 00:24:51.649 "buf_count": 2048, 00:24:51.649 "large_cache_size": 16, 00:24:51.649 "sequence_count": 2048, 00:24:51.649 "small_cache_size": 128, 00:24:51.649 "task_count": 2048 00:24:51.649 } 00:24:51.649 } 00:24:51.649 ] 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "subsystem": "bdev", 00:24:51.649 "config": [ 00:24:51.649 { 00:24:51.649 "method": "bdev_set_options", 00:24:51.649 "params": { 00:24:51.649 "bdev_auto_examine": true, 00:24:51.649 "bdev_io_cache_size": 256, 00:24:51.649 "bdev_io_pool_size": 65535, 00:24:51.649 "iobuf_large_cache_size": 16, 00:24:51.649 "iobuf_small_cache_size": 128 00:24:51.649 } 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "method": "bdev_raid_set_options", 00:24:51.649 "params": { 00:24:51.649 "process_window_size_kb": 1024 00:24:51.649 } 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "method": "bdev_iscsi_set_options", 00:24:51.649 "params": { 00:24:51.649 "timeout_sec": 30 00:24:51.649 } 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "method": "bdev_nvme_set_options", 00:24:51.649 "params": { 00:24:51.649 "action_on_timeout": "none", 00:24:51.649 "allow_accel_sequence": false, 00:24:51.649 "arbitration_burst": 0, 00:24:51.649 "bdev_retry_count": 3, 00:24:51.649 
"ctrlr_loss_timeout_sec": 0, 00:24:51.649 "delay_cmd_submit": true, 00:24:51.649 "dhchap_dhgroups": [ 00:24:51.649 "null", 00:24:51.649 "ffdhe2048", 00:24:51.649 "ffdhe3072", 00:24:51.649 "ffdhe4096", 00:24:51.649 "ffdhe6144", 00:24:51.649 "ffdhe8192" 00:24:51.649 ], 00:24:51.649 "dhchap_digests": [ 00:24:51.649 "sha256", 00:24:51.649 "sha384", 00:24:51.649 "sha512" 00:24:51.649 ], 00:24:51.649 "disable_auto_failback": false, 00:24:51.649 "fast_io_fail_timeout_sec": 0, 00:24:51.649 "generate_uuids": false, 00:24:51.649 "high_priority_weight": 0, 00:24:51.649 "io_path_stat": false, 00:24:51.649 "io_queue_requests": 512, 00:24:51.649 "keep_alive_timeout_ms": 10000, 00:24:51.649 "low_priority_weight": 0, 00:24:51.649 "medium_priority_weight": 0, 00:24:51.649 "nvme_adminq_poll_period_us": 10000, 00:24:51.649 "nvme_error_stat": false, 00:24:51.649 "nvme_ioq_poll_period_us": 0, 00:24:51.649 "rdma_cm_event_timeout_ms": 0, 00:24:51.649 "rdma_max_cq_size": 0, 00:24:51.649 "rdma_srq_size": 0, 00:24:51.649 "reconnect_delay_sec": 0, 00:24:51.649 "timeout_admin_us": 0, 00:24:51.649 "timeout_us": 0, 00:24:51.649 "transport_ack_timeout": 0, 00:24:51.649 "transport_retry_count": 4, 00:24:51.649 "transport_tos": 0 00:24:51.649 } 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "method": "bdev_nvme_attach_controller", 00:24:51.649 "params": { 00:24:51.649 "adrfam": "IPv4", 00:24:51.649 "ctrlr_loss_timeout_sec": 0, 00:24:51.649 "ddgst": false, 00:24:51.649 "fast_io_fail_timeout_sec": 0, 00:24:51.649 "hdgst": false, 00:24:51.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:51.649 "name": "TLSTEST", 00:24:51.649 "prchk_guard": false, 00:24:51.649 "prchk_reftag": false, 00:24:51.649 "psk": "/tmp/tmp.mufboOTJKP", 00:24:51.649 "reconnect_delay_sec": 0, 00:24:51.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.649 "traddr": "10.0.0.2", 00:24:51.649 "trsvcid": "4420", 00:24:51.649 "trtype": "TCP" 00:24:51.649 } 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "method": "bdev_nvme_set_hotplug", 00:24:51.649 "params": { 00:24:51.649 "enable": false, 00:24:51.649 "period_us": 100000 00:24:51.649 } 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "method": "bdev_wait_for_examine" 00:24:51.649 } 00:24:51.649 ] 00:24:51.649 }, 00:24:51.649 { 00:24:51.649 "subsystem": "nbd", 00:24:51.649 "config": [] 00:24:51.649 } 00:24:51.649 ] 00:24:51.649 }' 00:24:51.908 [2024-04-18 11:15:59.928767] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:51.908 [2024-04-18 11:15:59.928947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80228 ] 00:24:51.908 [2024-04-18 11:16:00.092790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.165 [2024-04-18 11:16:00.334553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.777 [2024-04-18 11:16:00.715670] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:52.777 [2024-04-18 11:16:00.715850] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:52.777 11:16:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:52.777 11:16:00 -- common/autotest_common.sh@850 -- # return 0 00:24:52.777 11:16:00 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:52.777 Running I/O for 10 seconds... 00:25:04.979 00:25:04.979 Latency(us) 00:25:04.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.979 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:04.979 Verification LBA range: start 0x0 length 0x2000 00:25:04.979 TLSTESTn1 : 10.04 2728.38 10.66 0.00 0.00 46810.57 8757.99 33125.47 00:25:04.979 =================================================================================================================== 00:25:04.979 Total : 2728.38 10.66 0.00 0.00 46810.57 8757.99 33125.47 00:25:04.979 0 00:25:04.979 11:16:11 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:04.979 11:16:11 -- target/tls.sh@214 -- # killprocess 80228 00:25:04.979 11:16:11 -- common/autotest_common.sh@936 -- # '[' -z 80228 ']' 00:25:04.979 11:16:11 -- common/autotest_common.sh@940 -- # kill -0 80228 00:25:04.979 11:16:11 -- common/autotest_common.sh@941 -- # uname 00:25:04.979 11:16:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:04.979 11:16:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80228 00:25:04.979 killing process with pid 80228 00:25:04.979 Received shutdown signal, test time was about 10.000000 seconds 00:25:04.979 00:25:04.979 Latency(us) 00:25:04.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.979 =================================================================================================================== 00:25:04.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.979 11:16:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:04.979 11:16:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:04.979 11:16:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80228' 00:25:04.979 11:16:11 -- common/autotest_common.sh@955 -- # kill 80228 00:25:04.979 [2024-04-18 11:16:11.068693] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:04.979 11:16:11 -- common/autotest_common.sh@960 -- # wait 80228 00:25:04.979 11:16:12 -- target/tls.sh@215 -- # killprocess 80185 00:25:04.979 11:16:12 -- common/autotest_common.sh@936 -- # '[' -z 80185 ']' 00:25:04.979 11:16:12 -- common/autotest_common.sh@940 -- # kill -0 80185 00:25:04.979 11:16:12 
-- common/autotest_common.sh@941 -- # uname 00:25:04.979 11:16:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:04.979 11:16:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80185 00:25:04.979 killing process with pid 80185 00:25:04.979 11:16:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:04.979 11:16:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:04.979 11:16:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80185' 00:25:04.979 11:16:12 -- common/autotest_common.sh@955 -- # kill 80185 00:25:04.979 [2024-04-18 11:16:12.269447] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:04.979 11:16:12 -- common/autotest_common.sh@960 -- # wait 80185 00:25:05.547 11:16:13 -- target/tls.sh@218 -- # nvmfappstart 00:25:05.547 11:16:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:05.547 11:16:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:05.547 11:16:13 -- common/autotest_common.sh@10 -- # set +x 00:25:05.547 11:16:13 -- nvmf/common.sh@470 -- # nvmfpid=80398 00:25:05.547 11:16:13 -- nvmf/common.sh@471 -- # waitforlisten 80398 00:25:05.547 11:16:13 -- common/autotest_common.sh@817 -- # '[' -z 80398 ']' 00:25:05.547 11:16:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:05.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.547 11:16:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.547 11:16:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:05.547 11:16:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.547 11:16:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:05.547 11:16:13 -- common/autotest_common.sh@10 -- # set +x 00:25:05.547 [2024-04-18 11:16:13.675526] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:05.547 [2024-04-18 11:16:13.675719] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.805 [2024-04-18 11:16:13.851178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.064 [2024-04-18 11:16:14.133637] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.064 [2024-04-18 11:16:14.133721] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.064 [2024-04-18 11:16:14.133742] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.064 [2024-04-18 11:16:14.133767] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.064 [2024-04-18 11:16:14.133782] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
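Every nvmf_tgt in this job is started with -e 0xFFFF, which is why the app_setup_trace notices above keep suggesting a trace snapshot. If one of these runs needs to be analysed after the fact, the two options the notices point at look roughly like this (instance id 0 matches the -i 0 on the target command line; the tarball name mirrors the cleanup step at the end of this log):

    # live snapshot of nvmf tracepoints while the target is still running
    spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory trace file for offline analysis
    tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0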
00:25:06.064 [2024-04-18 11:16:14.133822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.322 11:16:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:06.322 11:16:14 -- common/autotest_common.sh@850 -- # return 0 00:25:06.322 11:16:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:06.322 11:16:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:06.322 11:16:14 -- common/autotest_common.sh@10 -- # set +x 00:25:06.581 11:16:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.581 11:16:14 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.mufboOTJKP 00:25:06.581 11:16:14 -- target/tls.sh@49 -- # local key=/tmp/tmp.mufboOTJKP 00:25:06.581 11:16:14 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:06.581 [2024-04-18 11:16:14.782933] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.840 11:16:14 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:06.840 11:16:15 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:07.099 [2024-04-18 11:16:15.283103] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:07.099 [2024-04-18 11:16:15.283459] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.099 11:16:15 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:07.357 malloc0 00:25:07.615 11:16:15 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:07.874 11:16:15 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mufboOTJKP 00:25:07.874 [2024-04-18 11:16:16.070189] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:08.133 11:16:16 -- target/tls.sh@222 -- # bdevperf_pid=80502 00:25:08.133 11:16:16 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:08.133 11:16:16 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:08.133 11:16:16 -- target/tls.sh@225 -- # waitforlisten 80502 /var/tmp/bdevperf.sock 00:25:08.133 11:16:16 -- common/autotest_common.sh@817 -- # '[' -z 80502 ']' 00:25:08.133 11:16:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.133 11:16:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:08.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:08.133 11:16:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.133 11:16:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:08.133 11:16:16 -- common/autotest_common.sh@10 -- # set +x 00:25:08.133 [2024-04-18 11:16:16.176529] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
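setup_nvmf_tgt above is where the target side of the TLS test is assembled; stripped of the xtrace prefixes, it is six RPCs (all NQNs, the address and the temporary PSK file are exactly the ones traced above):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k makes this a secure-channel (TLS) listener, hence the "experimental" notice
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # allow host1 only with this PSK (the path form triggers the PSK-path deprecation warning)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mufboOTJKP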
00:25:08.133 [2024-04-18 11:16:16.176709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80502 ] 00:25:08.133 [2024-04-18 11:16:16.344228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.700 [2024-04-18 11:16:16.619532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.958 11:16:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:08.958 11:16:17 -- common/autotest_common.sh@850 -- # return 0 00:25:08.958 11:16:17 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mufboOTJKP 00:25:09.216 11:16:17 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:09.475 [2024-04-18 11:16:17.633933] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:09.733 nvme0n1 00:25:09.733 11:16:17 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:09.733 Running I/O for 1 seconds... 00:25:10.673 00:25:10.673 Latency(us) 00:25:10.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.673 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:10.673 Verification LBA range: start 0x0 length 0x2000 00:25:10.673 nvme0n1 : 1.03 2759.69 10.78 0.00 0.00 45624.15 2487.39 28716.68 00:25:10.673 =================================================================================================================== 00:25:10.673 Total : 2759.69 10.78 0.00 0.00 45624.15 2487.39 28716.68 00:25:10.673 0 00:25:10.931 11:16:18 -- target/tls.sh@234 -- # killprocess 80502 00:25:10.931 11:16:18 -- common/autotest_common.sh@936 -- # '[' -z 80502 ']' 00:25:10.931 11:16:18 -- common/autotest_common.sh@940 -- # kill -0 80502 00:25:10.931 11:16:18 -- common/autotest_common.sh@941 -- # uname 00:25:10.931 11:16:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:10.931 11:16:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80502 00:25:10.931 killing process with pid 80502 00:25:10.931 Received shutdown signal, test time was about 1.000000 seconds 00:25:10.931 00:25:10.931 Latency(us) 00:25:10.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.931 =================================================================================================================== 00:25:10.931 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.931 11:16:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:10.931 11:16:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:10.931 11:16:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80502' 00:25:10.931 11:16:18 -- common/autotest_common.sh@955 -- # kill 80502 00:25:10.931 11:16:18 -- common/autotest_common.sh@960 -- # wait 80502 00:25:11.873 11:16:20 -- target/tls.sh@235 -- # killprocess 80398 00:25:11.873 11:16:20 -- common/autotest_common.sh@936 -- # '[' -z 80398 ']' 00:25:11.873 11:16:20 -- common/autotest_common.sh@940 -- # kill -0 80398 00:25:11.873 11:16:20 -- common/autotest_common.sh@941 -- # 
uname 00:25:11.873 11:16:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:11.873 11:16:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80398 00:25:12.130 killing process with pid 80398 00:25:12.130 11:16:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:12.130 11:16:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:12.130 11:16:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80398' 00:25:12.130 11:16:20 -- common/autotest_common.sh@955 -- # kill 80398 00:25:12.130 [2024-04-18 11:16:20.097762] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:12.130 11:16:20 -- common/autotest_common.sh@960 -- # wait 80398 00:25:13.519 11:16:21 -- target/tls.sh@238 -- # nvmfappstart 00:25:13.519 11:16:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:13.519 11:16:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:13.519 11:16:21 -- common/autotest_common.sh@10 -- # set +x 00:25:13.519 11:16:21 -- nvmf/common.sh@470 -- # nvmfpid=80601 00:25:13.519 11:16:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:13.519 11:16:21 -- nvmf/common.sh@471 -- # waitforlisten 80601 00:25:13.519 11:16:21 -- common/autotest_common.sh@817 -- # '[' -z 80601 ']' 00:25:13.519 11:16:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.519 11:16:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:13.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.519 11:16:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.519 11:16:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:13.519 11:16:21 -- common/autotest_common.sh@10 -- # set +x 00:25:13.519 [2024-04-18 11:16:21.453637] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:13.519 [2024-04-18 11:16:21.453791] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.519 [2024-04-18 11:16:21.617883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.777 [2024-04-18 11:16:21.898958] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.777 [2024-04-18 11:16:21.899053] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.777 [2024-04-18 11:16:21.899091] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.777 [2024-04-18 11:16:21.899117] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.777 [2024-04-18 11:16:21.899151] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
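The attach traced at target/tls.sh@227-228 above uses the keyring flow rather than a raw PSK path: the key file is first registered on the bdevperf instance under a name, and the controller then references that name. Collected in one place, with the names from this run:

    # register the PSK file as key0 on the initiator-side application
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mufboOTJKP
    # attach over TLS, referencing the key by name instead of by path
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1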
00:25:13.777 [2024-04-18 11:16:21.899194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.342 11:16:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:14.342 11:16:22 -- common/autotest_common.sh@850 -- # return 0 00:25:14.342 11:16:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:14.342 11:16:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:14.342 11:16:22 -- common/autotest_common.sh@10 -- # set +x 00:25:14.342 11:16:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.342 11:16:22 -- target/tls.sh@239 -- # rpc_cmd 00:25:14.342 11:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.342 11:16:22 -- common/autotest_common.sh@10 -- # set +x 00:25:14.342 [2024-04-18 11:16:22.461478] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.342 malloc0 00:25:14.342 [2024-04-18 11:16:22.519907] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:14.342 [2024-04-18 11:16:22.520382] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.342 11:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.343 11:16:22 -- target/tls.sh@252 -- # bdevperf_pid=80652 00:25:14.343 11:16:22 -- target/tls.sh@254 -- # waitforlisten 80652 /var/tmp/bdevperf.sock 00:25:14.343 11:16:22 -- common/autotest_common.sh@817 -- # '[' -z 80652 ']' 00:25:14.343 11:16:22 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:14.343 11:16:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:14.343 11:16:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:14.343 11:16:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:14.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:14.343 11:16:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:14.343 11:16:22 -- common/autotest_common.sh@10 -- # set +x 00:25:14.600 [2024-04-18 11:16:22.646475] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:25:14.600 [2024-04-18 11:16:22.646707] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80652 ] 00:25:14.600 [2024-04-18 11:16:22.814997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.858 [2024-04-18 11:16:23.056058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.423 11:16:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:15.423 11:16:23 -- common/autotest_common.sh@850 -- # return 0 00:25:15.423 11:16:23 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mufboOTJKP 00:25:15.682 11:16:23 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:15.939 [2024-04-18 11:16:24.073572] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:15.939 nvme0n1 00:25:16.197 11:16:24 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:16.197 Running I/O for 1 seconds... 00:25:17.132 00:25:17.132 Latency(us) 00:25:17.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.132 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.132 Verification LBA range: start 0x0 length 0x2000 00:25:17.132 nvme0n1 : 1.04 2669.67 10.43 0.00 0.00 47045.85 9889.98 28955.00 00:25:17.132 =================================================================================================================== 00:25:17.132 Total : 2669.67 10.43 0.00 0.00 47045.85 9889.98 28955.00 00:25:17.132 0 00:25:17.132 11:16:25 -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:17.132 11:16:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.132 11:16:25 -- common/autotest_common.sh@10 -- # set +x 00:25:17.390 11:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.390 11:16:25 -- target/tls.sh@263 -- # tgtcfg='{ 00:25:17.390 "subsystems": [ 00:25:17.390 { 00:25:17.390 "subsystem": "keyring", 00:25:17.390 "config": [ 00:25:17.390 { 00:25:17.390 "method": "keyring_file_add_key", 00:25:17.390 "params": { 00:25:17.390 "name": "key0", 00:25:17.390 "path": "/tmp/tmp.mufboOTJKP" 00:25:17.390 } 00:25:17.390 } 00:25:17.390 ] 00:25:17.390 }, 00:25:17.390 { 00:25:17.390 "subsystem": "iobuf", 00:25:17.390 "config": [ 00:25:17.390 { 00:25:17.390 "method": "iobuf_set_options", 00:25:17.390 "params": { 00:25:17.390 "large_bufsize": 135168, 00:25:17.390 "large_pool_count": 1024, 00:25:17.390 "small_bufsize": 8192, 00:25:17.390 "small_pool_count": 8192 00:25:17.390 } 00:25:17.390 } 00:25:17.390 ] 00:25:17.390 }, 00:25:17.390 { 00:25:17.390 "subsystem": "sock", 00:25:17.390 "config": [ 00:25:17.390 { 00:25:17.390 "method": "sock_impl_set_options", 00:25:17.390 "params": { 00:25:17.390 "enable_ktls": false, 00:25:17.390 "enable_placement_id": 0, 00:25:17.391 "enable_quickack": false, 00:25:17.391 "enable_recv_pipe": true, 00:25:17.391 "enable_zerocopy_send_client": false, 00:25:17.391 "enable_zerocopy_send_server": true, 00:25:17.391 "impl_name": "posix", 00:25:17.391 "recv_buf_size": 2097152, 00:25:17.391 "send_buf_size": 2097152, 
00:25:17.391 "tls_version": 0, 00:25:17.391 "zerocopy_threshold": 0 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "sock_impl_set_options", 00:25:17.391 "params": { 00:25:17.391 "enable_ktls": false, 00:25:17.391 "enable_placement_id": 0, 00:25:17.391 "enable_quickack": false, 00:25:17.391 "enable_recv_pipe": true, 00:25:17.391 "enable_zerocopy_send_client": false, 00:25:17.391 "enable_zerocopy_send_server": true, 00:25:17.391 "impl_name": "ssl", 00:25:17.391 "recv_buf_size": 4096, 00:25:17.391 "send_buf_size": 4096, 00:25:17.391 "tls_version": 0, 00:25:17.391 "zerocopy_threshold": 0 00:25:17.391 } 00:25:17.391 } 00:25:17.391 ] 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "subsystem": "vmd", 00:25:17.391 "config": [] 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "subsystem": "accel", 00:25:17.391 "config": [ 00:25:17.391 { 00:25:17.391 "method": "accel_set_options", 00:25:17.391 "params": { 00:25:17.391 "buf_count": 2048, 00:25:17.391 "large_cache_size": 16, 00:25:17.391 "sequence_count": 2048, 00:25:17.391 "small_cache_size": 128, 00:25:17.391 "task_count": 2048 00:25:17.391 } 00:25:17.391 } 00:25:17.391 ] 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "subsystem": "bdev", 00:25:17.391 "config": [ 00:25:17.391 { 00:25:17.391 "method": "bdev_set_options", 00:25:17.391 "params": { 00:25:17.391 "bdev_auto_examine": true, 00:25:17.391 "bdev_io_cache_size": 256, 00:25:17.391 "bdev_io_pool_size": 65535, 00:25:17.391 "iobuf_large_cache_size": 16, 00:25:17.391 "iobuf_small_cache_size": 128 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "bdev_raid_set_options", 00:25:17.391 "params": { 00:25:17.391 "process_window_size_kb": 1024 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "bdev_iscsi_set_options", 00:25:17.391 "params": { 00:25:17.391 "timeout_sec": 30 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "bdev_nvme_set_options", 00:25:17.391 "params": { 00:25:17.391 "action_on_timeout": "none", 00:25:17.391 "allow_accel_sequence": false, 00:25:17.391 "arbitration_burst": 0, 00:25:17.391 "bdev_retry_count": 3, 00:25:17.391 "ctrlr_loss_timeout_sec": 0, 00:25:17.391 "delay_cmd_submit": true, 00:25:17.391 "dhchap_dhgroups": [ 00:25:17.391 "null", 00:25:17.391 "ffdhe2048", 00:25:17.391 "ffdhe3072", 00:25:17.391 "ffdhe4096", 00:25:17.391 "ffdhe6144", 00:25:17.391 "ffdhe8192" 00:25:17.391 ], 00:25:17.391 "dhchap_digests": [ 00:25:17.391 "sha256", 00:25:17.391 "sha384", 00:25:17.391 "sha512" 00:25:17.391 ], 00:25:17.391 "disable_auto_failback": false, 00:25:17.391 "fast_io_fail_timeout_sec": 0, 00:25:17.391 "generate_uuids": false, 00:25:17.391 "high_priority_weight": 0, 00:25:17.391 "io_path_stat": false, 00:25:17.391 "io_queue_requests": 0, 00:25:17.391 "keep_alive_timeout_ms": 10000, 00:25:17.391 "low_priority_weight": 0, 00:25:17.391 "medium_priority_weight": 0, 00:25:17.391 "nvme_adminq_poll_period_us": 10000, 00:25:17.391 "nvme_error_stat": false, 00:25:17.391 "nvme_ioq_poll_period_us": 0, 00:25:17.391 "rdma_cm_event_timeout_ms": 0, 00:25:17.391 "rdma_max_cq_size": 0, 00:25:17.391 "rdma_srq_size": 0, 00:25:17.391 "reconnect_delay_sec": 0, 00:25:17.391 "timeout_admin_us": 0, 00:25:17.391 "timeout_us": 0, 00:25:17.391 "transport_ack_timeout": 0, 00:25:17.391 "transport_retry_count": 4, 00:25:17.391 "transport_tos": 0 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "bdev_nvme_set_hotplug", 00:25:17.391 "params": { 00:25:17.391 "enable": false, 00:25:17.391 "period_us": 100000 00:25:17.391 } 00:25:17.391 
}, 00:25:17.391 { 00:25:17.391 "method": "bdev_malloc_create", 00:25:17.391 "params": { 00:25:17.391 "block_size": 4096, 00:25:17.391 "name": "malloc0", 00:25:17.391 "num_blocks": 8192, 00:25:17.391 "optimal_io_boundary": 0, 00:25:17.391 "physical_block_size": 4096, 00:25:17.391 "uuid": "296390d0-a4b7-453e-8d7f-e0597cd5fbc0" 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "bdev_wait_for_examine" 00:25:17.391 } 00:25:17.391 ] 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "subsystem": "nbd", 00:25:17.391 "config": [] 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "subsystem": "scheduler", 00:25:17.391 "config": [ 00:25:17.391 { 00:25:17.391 "method": "framework_set_scheduler", 00:25:17.391 "params": { 00:25:17.391 "name": "static" 00:25:17.391 } 00:25:17.391 } 00:25:17.391 ] 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "subsystem": "nvmf", 00:25:17.391 "config": [ 00:25:17.391 { 00:25:17.391 "method": "nvmf_set_config", 00:25:17.391 "params": { 00:25:17.391 "admin_cmd_passthru": { 00:25:17.391 "identify_ctrlr": false 00:25:17.391 }, 00:25:17.391 "discovery_filter": "match_any" 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "nvmf_set_max_subsystems", 00:25:17.391 "params": { 00:25:17.391 "max_subsystems": 1024 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "nvmf_set_crdt", 00:25:17.391 "params": { 00:25:17.391 "crdt1": 0, 00:25:17.391 "crdt2": 0, 00:25:17.391 "crdt3": 0 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "nvmf_create_transport", 00:25:17.391 "params": { 00:25:17.391 "abort_timeout_sec": 1, 00:25:17.391 "ack_timeout": 0, 00:25:17.391 "buf_cache_size": 4294967295, 00:25:17.391 "c2h_success": false, 00:25:17.391 "dif_insert_or_strip": false, 00:25:17.391 "in_capsule_data_size": 4096, 00:25:17.391 "io_unit_size": 131072, 00:25:17.391 "max_aq_depth": 128, 00:25:17.391 "max_io_qpairs_per_ctrlr": 127, 00:25:17.391 "max_io_size": 131072, 00:25:17.391 "max_queue_depth": 128, 00:25:17.391 "num_shared_buffers": 511, 00:25:17.391 "sock_priority": 0, 00:25:17.391 "trtype": "TCP", 00:25:17.391 "zcopy": false 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "nvmf_create_subsystem", 00:25:17.391 "params": { 00:25:17.391 "allow_any_host": false, 00:25:17.391 "ana_reporting": false, 00:25:17.391 "max_cntlid": 65519, 00:25:17.391 "max_namespaces": 32, 00:25:17.391 "min_cntlid": 1, 00:25:17.391 "model_number": "SPDK bdev Controller", 00:25:17.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.391 "serial_number": "00000000000000000000" 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "nvmf_subsystem_add_host", 00:25:17.391 "params": { 00:25:17.391 "host": "nqn.2016-06.io.spdk:host1", 00:25:17.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.391 "psk": "key0" 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "nvmf_subsystem_add_ns", 00:25:17.391 "params": { 00:25:17.391 "namespace": { 00:25:17.391 "bdev_name": "malloc0", 00:25:17.391 "nguid": "296390D0A4B7453E8D7FE0597CD5FBC0", 00:25:17.391 "no_auto_visible": false, 00:25:17.391 "nsid": 1, 00:25:17.391 "uuid": "296390d0-a4b7-453e-8d7f-e0597cd5fbc0" 00:25:17.391 }, 00:25:17.391 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:17.391 } 00:25:17.391 }, 00:25:17.391 { 00:25:17.391 "method": "nvmf_subsystem_add_listener", 00:25:17.391 "params": { 00:25:17.391 "listen_address": { 00:25:17.391 "adrfam": "IPv4", 00:25:17.391 "traddr": "10.0.0.2", 00:25:17.391 "trsvcid": "4420", 00:25:17.391 "trtype": "TCP" 00:25:17.391 }, 
00:25:17.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.391 "secure_channel": true 00:25:17.391 } 00:25:17.391 } 00:25:17.391 ] 00:25:17.391 } 00:25:17.391 ] 00:25:17.391 }' 00:25:17.391 11:16:25 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:17.650 11:16:25 -- target/tls.sh@264 -- # bperfcfg='{ 00:25:17.650 "subsystems": [ 00:25:17.650 { 00:25:17.650 "subsystem": "keyring", 00:25:17.650 "config": [ 00:25:17.650 { 00:25:17.650 "method": "keyring_file_add_key", 00:25:17.650 "params": { 00:25:17.650 "name": "key0", 00:25:17.650 "path": "/tmp/tmp.mufboOTJKP" 00:25:17.650 } 00:25:17.650 } 00:25:17.650 ] 00:25:17.650 }, 00:25:17.650 { 00:25:17.650 "subsystem": "iobuf", 00:25:17.650 "config": [ 00:25:17.650 { 00:25:17.650 "method": "iobuf_set_options", 00:25:17.650 "params": { 00:25:17.650 "large_bufsize": 135168, 00:25:17.650 "large_pool_count": 1024, 00:25:17.650 "small_bufsize": 8192, 00:25:17.650 "small_pool_count": 8192 00:25:17.650 } 00:25:17.650 } 00:25:17.650 ] 00:25:17.650 }, 00:25:17.650 { 00:25:17.650 "subsystem": "sock", 00:25:17.650 "config": [ 00:25:17.650 { 00:25:17.650 "method": "sock_impl_set_options", 00:25:17.650 "params": { 00:25:17.650 "enable_ktls": false, 00:25:17.650 "enable_placement_id": 0, 00:25:17.650 "enable_quickack": false, 00:25:17.650 "enable_recv_pipe": true, 00:25:17.650 "enable_zerocopy_send_client": false, 00:25:17.650 "enable_zerocopy_send_server": true, 00:25:17.650 "impl_name": "posix", 00:25:17.650 "recv_buf_size": 2097152, 00:25:17.650 "send_buf_size": 2097152, 00:25:17.650 "tls_version": 0, 00:25:17.650 "zerocopy_threshold": 0 00:25:17.650 } 00:25:17.650 }, 00:25:17.650 { 00:25:17.650 "method": "sock_impl_set_options", 00:25:17.650 "params": { 00:25:17.650 "enable_ktls": false, 00:25:17.650 "enable_placement_id": 0, 00:25:17.650 "enable_quickack": false, 00:25:17.650 "enable_recv_pipe": true, 00:25:17.650 "enable_zerocopy_send_client": false, 00:25:17.650 "enable_zerocopy_send_server": true, 00:25:17.650 "impl_name": "ssl", 00:25:17.650 "recv_buf_size": 4096, 00:25:17.650 "send_buf_size": 4096, 00:25:17.650 "tls_version": 0, 00:25:17.650 "zerocopy_threshold": 0 00:25:17.650 } 00:25:17.650 } 00:25:17.650 ] 00:25:17.650 }, 00:25:17.650 { 00:25:17.650 "subsystem": "vmd", 00:25:17.650 "config": [] 00:25:17.650 }, 00:25:17.650 { 00:25:17.650 "subsystem": "accel", 00:25:17.650 "config": [ 00:25:17.650 { 00:25:17.650 "method": "accel_set_options", 00:25:17.650 "params": { 00:25:17.650 "buf_count": 2048, 00:25:17.650 "large_cache_size": 16, 00:25:17.650 "sequence_count": 2048, 00:25:17.650 "small_cache_size": 128, 00:25:17.650 "task_count": 2048 00:25:17.650 } 00:25:17.650 } 00:25:17.650 ] 00:25:17.650 }, 00:25:17.650 { 00:25:17.650 "subsystem": "bdev", 00:25:17.650 "config": [ 00:25:17.650 { 00:25:17.650 "method": "bdev_set_options", 00:25:17.651 "params": { 00:25:17.651 "bdev_auto_examine": true, 00:25:17.651 "bdev_io_cache_size": 256, 00:25:17.651 "bdev_io_pool_size": 65535, 00:25:17.651 "iobuf_large_cache_size": 16, 00:25:17.651 "iobuf_small_cache_size": 128 00:25:17.651 } 00:25:17.651 }, 00:25:17.651 { 00:25:17.651 "method": "bdev_raid_set_options", 00:25:17.651 "params": { 00:25:17.651 "process_window_size_kb": 1024 00:25:17.651 } 00:25:17.651 }, 00:25:17.651 { 00:25:17.651 "method": "bdev_iscsi_set_options", 00:25:17.651 "params": { 00:25:17.651 "timeout_sec": 30 00:25:17.651 } 00:25:17.651 }, 00:25:17.651 { 00:25:17.651 "method": "bdev_nvme_set_options", 00:25:17.651 "params": { 
00:25:17.651 "action_on_timeout": "none", 00:25:17.651 "allow_accel_sequence": false, 00:25:17.651 "arbitration_burst": 0, 00:25:17.651 "bdev_retry_count": 3, 00:25:17.651 "ctrlr_loss_timeout_sec": 0, 00:25:17.651 "delay_cmd_submit": true, 00:25:17.651 "dhchap_dhgroups": [ 00:25:17.651 "null", 00:25:17.651 "ffdhe2048", 00:25:17.651 "ffdhe3072", 00:25:17.651 "ffdhe4096", 00:25:17.651 "ffdhe6144", 00:25:17.651 "ffdhe8192" 00:25:17.651 ], 00:25:17.651 "dhchap_digests": [ 00:25:17.651 "sha256", 00:25:17.651 "sha384", 00:25:17.651 "sha512" 00:25:17.651 ], 00:25:17.651 "disable_auto_failback": false, 00:25:17.651 "fast_io_fail_timeout_sec": 0, 00:25:17.651 "generate_uuids": false, 00:25:17.651 "high_priority_weight": 0, 00:25:17.651 "io_path_stat": false, 00:25:17.651 "io_queue_requests": 512, 00:25:17.651 "keep_alive_timeout_ms": 10000, 00:25:17.651 "low_priority_weight": 0, 00:25:17.651 "medium_priority_weight": 0, 00:25:17.651 "nvme_adminq_poll_period_us": 10000, 00:25:17.651 "nvme_error_stat": false, 00:25:17.651 "nvme_ioq_poll_period_us": 0, 00:25:17.651 "rdma_cm_event_timeout_ms": 0, 00:25:17.651 "rdma_max_cq_size": 0, 00:25:17.651 "rdma_srq_size": 0, 00:25:17.651 "reconnect_delay_sec": 0, 00:25:17.651 "timeout_admin_us": 0, 00:25:17.651 "timeout_us": 0, 00:25:17.651 "transport_ack_timeout": 0, 00:25:17.651 "transport_retry_count": 4, 00:25:17.651 "transport_tos": 0 00:25:17.651 } 00:25:17.651 }, 00:25:17.651 { 00:25:17.651 "method": "bdev_nvme_attach_controller", 00:25:17.651 "params": { 00:25:17.651 "adrfam": "IPv4", 00:25:17.651 "ctrlr_loss_timeout_sec": 0, 00:25:17.651 "ddgst": false, 00:25:17.651 "fast_io_fail_timeout_sec": 0, 00:25:17.651 "hdgst": false, 00:25:17.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:17.651 "name": "nvme0", 00:25:17.651 "prchk_guard": false, 00:25:17.651 "prchk_reftag": false, 00:25:17.651 "psk": "key0", 00:25:17.651 "reconnect_delay_sec": 0, 00:25:17.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.651 "traddr": "10.0.0.2", 00:25:17.651 "trsvcid": "4420", 00:25:17.651 "trtype": "TCP" 00:25:17.651 } 00:25:17.651 }, 00:25:17.651 { 00:25:17.651 "method": "bdev_nvme_set_hotplug", 00:25:17.651 "params": { 00:25:17.651 "enable": false, 00:25:17.651 "period_us": 100000 00:25:17.651 } 00:25:17.651 }, 00:25:17.651 { 00:25:17.651 "method": "bdev_enable_histogram", 00:25:17.651 "params": { 00:25:17.651 "enable": true, 00:25:17.651 "name": "nvme0n1" 00:25:17.651 } 00:25:17.651 }, 00:25:17.651 { 00:25:17.651 "method": "bdev_wait_for_examine" 00:25:17.651 } 00:25:17.651 ] 00:25:17.651 }, 00:25:17.651 { 00:25:17.651 "subsystem": "nbd", 00:25:17.651 "config": [] 00:25:17.651 } 00:25:17.651 ] 00:25:17.651 }' 00:25:17.651 11:16:25 -- target/tls.sh@266 -- # killprocess 80652 00:25:17.651 11:16:25 -- common/autotest_common.sh@936 -- # '[' -z 80652 ']' 00:25:17.651 11:16:25 -- common/autotest_common.sh@940 -- # kill -0 80652 00:25:17.651 11:16:25 -- common/autotest_common.sh@941 -- # uname 00:25:17.651 11:16:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:17.651 11:16:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80652 00:25:17.651 11:16:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:17.651 11:16:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:17.651 11:16:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80652' 00:25:17.651 killing process with pid 80652 00:25:17.651 Received shutdown signal, test time was about 1.000000 seconds 00:25:17.651 00:25:17.651 
Latency(us) 00:25:17.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.651 =================================================================================================================== 00:25:17.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.651 11:16:25 -- common/autotest_common.sh@955 -- # kill 80652 00:25:17.651 11:16:25 -- common/autotest_common.sh@960 -- # wait 80652 00:25:19.027 11:16:27 -- target/tls.sh@267 -- # killprocess 80601 00:25:19.027 11:16:27 -- common/autotest_common.sh@936 -- # '[' -z 80601 ']' 00:25:19.027 11:16:27 -- common/autotest_common.sh@940 -- # kill -0 80601 00:25:19.027 11:16:27 -- common/autotest_common.sh@941 -- # uname 00:25:19.027 11:16:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:19.027 11:16:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80601 00:25:19.027 11:16:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:19.027 11:16:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:19.027 killing process with pid 80601 00:25:19.027 11:16:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80601' 00:25:19.027 11:16:27 -- common/autotest_common.sh@955 -- # kill 80601 00:25:19.027 11:16:27 -- common/autotest_common.sh@960 -- # wait 80601 00:25:20.444 11:16:28 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:20.444 11:16:28 -- target/tls.sh@269 -- # echo '{ 00:25:20.444 "subsystems": [ 00:25:20.444 { 00:25:20.444 "subsystem": "keyring", 00:25:20.444 "config": [ 00:25:20.444 { 00:25:20.444 "method": "keyring_file_add_key", 00:25:20.444 "params": { 00:25:20.444 "name": "key0", 00:25:20.444 "path": "/tmp/tmp.mufboOTJKP" 00:25:20.444 } 00:25:20.444 } 00:25:20.444 ] 00:25:20.444 }, 00:25:20.444 { 00:25:20.444 "subsystem": "iobuf", 00:25:20.444 "config": [ 00:25:20.444 { 00:25:20.444 "method": "iobuf_set_options", 00:25:20.444 "params": { 00:25:20.444 "large_bufsize": 135168, 00:25:20.444 "large_pool_count": 1024, 00:25:20.444 "small_bufsize": 8192, 00:25:20.444 "small_pool_count": 8192 00:25:20.444 } 00:25:20.444 } 00:25:20.444 ] 00:25:20.444 }, 00:25:20.444 { 00:25:20.444 "subsystem": "sock", 00:25:20.444 "config": [ 00:25:20.444 { 00:25:20.444 "method": "sock_impl_set_options", 00:25:20.444 "params": { 00:25:20.444 "enable_ktls": false, 00:25:20.444 "enable_placement_id": 0, 00:25:20.444 "enable_quickack": false, 00:25:20.444 "enable_recv_pipe": true, 00:25:20.444 "enable_zerocopy_send_client": false, 00:25:20.444 "enable_zerocopy_send_server": true, 00:25:20.444 "impl_name": "posix", 00:25:20.444 "recv_buf_size": 2097152, 00:25:20.444 "send_buf_size": 2097152, 00:25:20.444 "tls_version": 0, 00:25:20.444 "zerocopy_threshold": 0 00:25:20.444 } 00:25:20.444 }, 00:25:20.444 { 00:25:20.444 "method": "sock_impl_set_options", 00:25:20.444 "params": { 00:25:20.444 "enable_ktls": false, 00:25:20.444 "enable_placement_id": 0, 00:25:20.444 "enable_quickack": false, 00:25:20.444 "enable_recv_pipe": true, 00:25:20.444 "enable_zerocopy_send_client": false, 00:25:20.444 "enable_zerocopy_send_server": true, 00:25:20.444 "impl_name": "ssl", 00:25:20.444 "recv_buf_size": 4096, 00:25:20.444 "send_buf_size": 4096, 00:25:20.444 "tls_version": 0, 00:25:20.444 "zerocopy_threshold": 0 00:25:20.444 } 00:25:20.444 } 00:25:20.444 ] 00:25:20.444 }, 00:25:20.444 { 00:25:20.444 "subsystem": "vmd", 00:25:20.444 "config": [] 00:25:20.444 }, 00:25:20.444 { 00:25:20.444 "subsystem": "accel", 00:25:20.444 "config": [ 00:25:20.444 { 00:25:20.444 
"method": "accel_set_options", 00:25:20.444 "params": { 00:25:20.444 "buf_count": 2048, 00:25:20.444 "large_cache_size": 16, 00:25:20.444 "sequence_count": 2048, 00:25:20.444 "small_cache_size": 128, 00:25:20.444 "task_count": 2048 00:25:20.444 } 00:25:20.444 } 00:25:20.444 ] 00:25:20.444 }, 00:25:20.444 { 00:25:20.444 "subsystem": "bdev", 00:25:20.444 "config": [ 00:25:20.444 { 00:25:20.444 "method": "bdev_set_options", 00:25:20.444 "params": { 00:25:20.444 "bdev_auto_examine": true, 00:25:20.444 "bdev_io_cache_size": 256, 00:25:20.444 "bdev_io_pool_size": 65535, 00:25:20.444 "iobuf_large_cache_size": 16, 00:25:20.444 "iobuf_small_cache_size": 128 00:25:20.444 } 00:25:20.444 }, 00:25:20.444 { 00:25:20.444 "method": "bdev_raid_set_options", 00:25:20.444 "params": { 00:25:20.444 "process_window_size_kb": 1024 00:25:20.444 } 00:25:20.444 }, 00:25:20.444 { 00:25:20.444 "method": "bdev_iscsi_set_options", 00:25:20.444 "params": { 00:25:20.444 "timeout_sec": 30 00:25:20.444 } 00:25:20.444 }, 00:25:20.444 { 00:25:20.444 "method": "bdev_nvme_set_options", 00:25:20.444 "params": { 00:25:20.444 "action_on_timeout": "none", 00:25:20.444 "allow_accel_sequence": false, 00:25:20.444 "arbitration_burst": 0, 00:25:20.444 "bdev_retry_count": 3, 00:25:20.444 "ctrlr_loss_timeout_sec": 0, 00:25:20.444 "delay_cmd_submit": true, 00:25:20.444 "dhchap_dhgroups": [ 00:25:20.444 "null", 00:25:20.444 "ffdhe2048", 00:25:20.444 "ffdhe3072", 00:25:20.444 "ffdhe4096", 00:25:20.444 "ffdhe6144", 00:25:20.444 "ffdhe8192" 00:25:20.444 ], 00:25:20.444 "dhchap_digests": [ 00:25:20.444 "sha256", 00:25:20.444 "sha384", 00:25:20.444 "sha512" 00:25:20.444 ], 00:25:20.445 "disable_auto_failback": false, 00:25:20.445 "fast_io_fail_timeout_sec": 0, 00:25:20.445 "generate_uuids": false, 00:25:20.445 "high_priority_weight": 0, 00:25:20.445 "io_path_stat": false, 00:25:20.445 "io_queue_requests": 0, 00:25:20.445 "keep_alive_timeout_ms": 10000, 00:25:20.445 "low_priority_weight": 0, 00:25:20.445 "medium_priority_weight": 0, 00:25:20.445 "nvme_adminq_poll_period_us": 10000, 00:25:20.445 "nvme_error_stat": false, 00:25:20.445 "nvme_ioq_poll_period_us": 0, 00:25:20.445 "rdma_cm_event_timeout_ms": 0, 00:25:20.445 "rdma_max_cq_size": 0, 00:25:20.445 "rdma_srq_size": 0, 00:25:20.445 "reconnect_delay_sec": 0, 00:25:20.445 "timeout_admin_us": 0, 00:25:20.445 "timeout_us": 0, 00:25:20.445 "transport_ack_timeout": 0, 00:25:20.445 "transport_retry_count": 4, 00:25:20.445 "transport_tos": 0 00:25:20.445 } 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "method": "bdev_nvme_set_hotplug", 00:25:20.445 "params": { 00:25:20.445 "enable": false, 00:25:20.445 "period_us": 100000 00:25:20.445 } 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "method": "bdev_malloc_create", 00:25:20.445 "params": { 00:25:20.445 "block_size": 4096, 00:25:20.445 "name": "malloc0", 00:25:20.445 "num_blocks": 8192, 00:25:20.445 "optimal_io_boundary": 0, 00:25:20.445 "physical_block_size": 4096, 00:25:20.445 "uuid": "296390d0-a4b7-453e-8d7f-e0597cd5fbc0" 00:25:20.445 } 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "method": "bdev_wait_for_examine" 00:25:20.445 } 00:25:20.445 ] 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "subsystem": "nbd", 00:25:20.445 "config": [] 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "subsystem": "scheduler", 00:25:20.445 "config": [ 00:25:20.445 { 00:25:20.445 "method": "framework_set_scheduler", 00:25:20.445 "params": { 00:25:20.445 "name": "static" 00:25:20.445 } 00:25:20.445 } 00:25:20.445 ] 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "subsystem": "nvmf", 
00:25:20.445 "config": [ 00:25:20.445 { 00:25:20.445 "method": "nvmf_set_config", 00:25:20.445 "params": { 00:25:20.445 "admin_cmd_passthru": { 00:25:20.445 "identify_ctrlr": false 00:25:20.445 }, 00:25:20.445 "discovery_filter": "match_any" 00:25:20.445 } 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "method": "nvmf_set_max_subsystems", 00:25:20.445 "params": { 00:25:20.445 "max_subsystems": 1024 00:25:20.445 } 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "method": "nvmf_set_crdt", 00:25:20.445 "params": { 00:25:20.445 "crdt1": 0, 00:25:20.445 "crdt2": 0, 00:25:20.445 "crdt3": 0 00:25:20.445 } 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "method": "nvmf_create_transport", 00:25:20.445 "params": { 00:25:20.445 "abort_timeout_sec": 1, 00:25:20.445 "ack_timeout": 0, 00:25:20.445 "buf_cache_size": 4294967295, 00:25:20.445 "c2h_success": false, 00:25:20.445 "dif_insert_or_strip": false, 00:25:20.445 "in_capsule_data_size": 4096, 00:25:20.445 "io_unit_size": 131072, 00:25:20.445 "max_aq_depth": 128, 00:25:20.445 "max_io_qpairs_per_ctrlr": 127, 00:25:20.445 "max_io_size": 131072, 00:25:20.445 "max_queue_depth": 128, 00:25:20.445 "num_shared_buffers": 511, 00:25:20.445 "sock_priority": 0, 00:25:20.445 "trtype": "TCP", 00:25:20.445 "zcopy": false 00:25:20.445 } 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "method": "nvmf_create_subsystem", 00:25:20.445 "params": { 00:25:20.445 "allow_any_host": false, 00:25:20.445 "ana_reporting": false, 00:25:20.445 "max_cntlid": 65519, 00:25:20.445 "max_namespaces": 32, 00:25:20.445 "min_cntlid": 1, 00:25:20.445 "model_number": "SPDK bdev Controller", 00:25:20.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.445 "serial_number": "00000000000000000000" 00:25:20.445 } 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "method": "nvmf_subsystem_add_host", 00:25:20.445 "params": { 00:25:20.445 "host": "nqn.2016-06.io.spdk:host1", 00:25:20.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.445 "psk": "key0" 00:25:20.445 } 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "method": "nvmf_subsystem_add_ns", 00:25:20.445 "params": { 00:25:20.445 "namespace": { 00:25:20.445 "bdev_name": "malloc0", 00:25:20.445 "nguid": "296390D0A4B7453E8D7FE0597CD5FBC0", 00:25:20.445 "no_auto_visible": false, 00:25:20.445 "nsid": 1, 00:25:20.445 "uuid": "296390d0-a4b7-453e-8d7f-e0597cd5fbc0" 00:25:20.445 }, 00:25:20.445 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:20.445 } 00:25:20.445 }, 00:25:20.445 { 00:25:20.445 "method": "nvmf_subsystem_add_listener", 00:25:20.445 "params": { 00:25:20.445 "listen_address": { 00:25:20.445 "adrfam": "IPv4", 00:25:20.445 "traddr": "10.0.0.2", 00:25:20.445 "trsvcid": "4420", 00:25:20.445 "trtype": "TCP" 00:25:20.445 }, 00:25:20.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.445 "secure_channel": true 00:25:20.445 } 00:25:20.445 } 00:25:20.445 ] 00:25:20.445 } 00:25:20.445 ] 00:25:20.445 }' 00:25:20.445 11:16:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:20.445 11:16:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:20.445 11:16:28 -- common/autotest_common.sh@10 -- # set +x 00:25:20.445 11:16:28 -- nvmf/common.sh@470 -- # nvmfpid=80768 00:25:20.445 11:16:28 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:20.445 11:16:28 -- nvmf/common.sh@471 -- # waitforlisten 80768 00:25:20.445 11:16:28 -- common/autotest_common.sh@817 -- # '[' -z 80768 ']' 00:25:20.445 11:16:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.445 
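tgtcfg above is the save_config dump taken from the previous target (target/tls.sh@263); the new nvmf_tgt (pid 80768) is started with that JSON on -c /dev/fd/62, so the subsystem, the malloc0 namespace, the PSK host entry and the TLS listener all come back without issuing a single RPC. The round trip reduces to roughly this sketch (the ip netns wrapper used in this job is omitted):

    tgtcfg=$(scripts/rpc.py save_config)                        # dump the live target configuration
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &    # restart straight from that dump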
11:16:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:20.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.445 11:16:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.445 11:16:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:20.445 11:16:28 -- common/autotest_common.sh@10 -- # set +x 00:25:20.445 [2024-04-18 11:16:28.391680] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:20.445 [2024-04-18 11:16:28.391831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.445 [2024-04-18 11:16:28.564421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.704 [2024-04-18 11:16:28.828843] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.704 [2024-04-18 11:16:28.828912] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.704 [2024-04-18 11:16:28.828948] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.704 [2024-04-18 11:16:28.828972] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.704 [2024-04-18 11:16:28.828993] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.704 [2024-04-18 11:16:28.829186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.271 [2024-04-18 11:16:29.322606] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.271 [2024-04-18 11:16:29.354493] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:21.271 [2024-04-18 11:16:29.354774] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.271 11:16:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:21.271 11:16:29 -- common/autotest_common.sh@850 -- # return 0 00:25:21.271 11:16:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:21.271 11:16:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:21.271 11:16:29 -- common/autotest_common.sh@10 -- # set +x 00:25:21.271 11:16:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.271 11:16:29 -- target/tls.sh@272 -- # bdevperf_pid=80815 00:25:21.271 11:16:29 -- target/tls.sh@273 -- # waitforlisten 80815 /var/tmp/bdevperf.sock 00:25:21.271 11:16:29 -- common/autotest_common.sh@817 -- # '[' -z 80815 ']' 00:25:21.271 11:16:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.271 11:16:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:21.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:21.271 11:16:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
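The three tcp.c notices just above confirm that the replayed configuration recreated the TCP transport and the TLS listener on 10.0.0.2 port 4420 before the next bdevperf is even started. This run relies on bdevperf connecting successfully as its proof; if such a replay ever needs to be inspected by hand, a standard RPC that this log does not use would be, for example:

    # not part of this run: dump the restored subsystems, listeners and allowed hosts
    scripts/rpc.py nvmf_get_subsystems | jq '.[] | {nqn, listen_addresses, hosts}'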
00:25:21.271 11:16:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:21.271 11:16:29 -- common/autotest_common.sh@10 -- # set +x 00:25:21.271 11:16:29 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:21.271 11:16:29 -- target/tls.sh@270 -- # echo '{ 00:25:21.271 "subsystems": [ 00:25:21.271 { 00:25:21.271 "subsystem": "keyring", 00:25:21.271 "config": [ 00:25:21.271 { 00:25:21.271 "method": "keyring_file_add_key", 00:25:21.271 "params": { 00:25:21.271 "name": "key0", 00:25:21.271 "path": "/tmp/tmp.mufboOTJKP" 00:25:21.271 } 00:25:21.271 } 00:25:21.271 ] 00:25:21.271 }, 00:25:21.271 { 00:25:21.271 "subsystem": "iobuf", 00:25:21.271 "config": [ 00:25:21.271 { 00:25:21.271 "method": "iobuf_set_options", 00:25:21.271 "params": { 00:25:21.271 "large_bufsize": 135168, 00:25:21.271 "large_pool_count": 1024, 00:25:21.271 "small_bufsize": 8192, 00:25:21.271 "small_pool_count": 8192 00:25:21.271 } 00:25:21.271 } 00:25:21.271 ] 00:25:21.271 }, 00:25:21.271 { 00:25:21.271 "subsystem": "sock", 00:25:21.271 "config": [ 00:25:21.271 { 00:25:21.271 "method": "sock_impl_set_options", 00:25:21.271 "params": { 00:25:21.271 "enable_ktls": false, 00:25:21.271 "enable_placement_id": 0, 00:25:21.271 "enable_quickack": false, 00:25:21.271 "enable_recv_pipe": true, 00:25:21.271 "enable_zerocopy_send_client": false, 00:25:21.271 "enable_zerocopy_send_server": true, 00:25:21.271 "impl_name": "posix", 00:25:21.271 "recv_buf_size": 2097152, 00:25:21.271 "send_buf_size": 2097152, 00:25:21.271 "tls_version": 0, 00:25:21.271 "zerocopy_threshold": 0 00:25:21.271 } 00:25:21.271 }, 00:25:21.271 { 00:25:21.271 "method": "sock_impl_set_options", 00:25:21.271 "params": { 00:25:21.271 "enable_ktls": false, 00:25:21.271 "enable_placement_id": 0, 00:25:21.271 "enable_quickack": false, 00:25:21.271 "enable_recv_pipe": true, 00:25:21.271 "enable_zerocopy_send_client": false, 00:25:21.271 "enable_zerocopy_send_server": true, 00:25:21.271 "impl_name": "ssl", 00:25:21.271 "recv_buf_size": 4096, 00:25:21.271 "send_buf_size": 4096, 00:25:21.271 "tls_version": 0, 00:25:21.271 "zerocopy_threshold": 0 00:25:21.271 } 00:25:21.271 } 00:25:21.271 ] 00:25:21.271 }, 00:25:21.271 { 00:25:21.271 "subsystem": "vmd", 00:25:21.272 "config": [] 00:25:21.272 }, 00:25:21.272 { 00:25:21.272 "subsystem": "accel", 00:25:21.272 "config": [ 00:25:21.272 { 00:25:21.272 "method": "accel_set_options", 00:25:21.272 "params": { 00:25:21.272 "buf_count": 2048, 00:25:21.272 "large_cache_size": 16, 00:25:21.272 "sequence_count": 2048, 00:25:21.272 "small_cache_size": 128, 00:25:21.272 "task_count": 2048 00:25:21.272 } 00:25:21.272 } 00:25:21.272 ] 00:25:21.272 }, 00:25:21.272 { 00:25:21.272 "subsystem": "bdev", 00:25:21.272 "config": [ 00:25:21.272 { 00:25:21.272 "method": "bdev_set_options", 00:25:21.272 "params": { 00:25:21.272 "bdev_auto_examine": true, 00:25:21.272 "bdev_io_cache_size": 256, 00:25:21.272 "bdev_io_pool_size": 65535, 00:25:21.272 "iobuf_large_cache_size": 16, 00:25:21.272 "iobuf_small_cache_size": 128 00:25:21.272 } 00:25:21.272 }, 00:25:21.272 { 00:25:21.272 "method": "bdev_raid_set_options", 00:25:21.272 "params": { 00:25:21.272 "process_window_size_kb": 1024 00:25:21.272 } 00:25:21.272 }, 00:25:21.272 { 00:25:21.272 "method": "bdev_iscsi_set_options", 00:25:21.272 "params": { 00:25:21.272 "timeout_sec": 30 00:25:21.272 } 00:25:21.272 }, 00:25:21.272 { 00:25:21.272 "method": "bdev_nvme_set_options", 00:25:21.272 "params": 
{ 00:25:21.272 "action_on_timeout": "none", 00:25:21.272 "allow_accel_sequence": false, 00:25:21.272 "arbitration_burst": 0, 00:25:21.272 "bdev_retry_count": 3, 00:25:21.272 "ctrlr_loss_timeout_sec": 0, 00:25:21.272 "delay_cmd_submit": true, 00:25:21.272 "dhchap_dhgroups": [ 00:25:21.272 "null", 00:25:21.272 "ffdhe2048", 00:25:21.272 "ffdhe3072", 00:25:21.272 "ffdhe4096", 00:25:21.272 "ffdhe6144", 00:25:21.272 "ffdhe8192" 00:25:21.272 ], 00:25:21.272 "dhchap_digests": [ 00:25:21.272 "sha256", 00:25:21.272 "sha384", 00:25:21.272 "sha512" 00:25:21.272 ], 00:25:21.272 "disable_auto_failback": false, 00:25:21.272 "fast_io_fail_timeout_sec": 0, 00:25:21.272 "generate_uuids": false, 00:25:21.272 "high_priority_weight": 0, 00:25:21.272 "io_path_stat": false, 00:25:21.272 "io_queue_requests": 512, 00:25:21.272 "keep_alive_timeout_ms": 10000, 00:25:21.272 "low_priority_weight": 0, 00:25:21.272 "medium_priority_weight": 0, 00:25:21.272 "nvme_adminq_poll_period_us": 10000, 00:25:21.272 "nvme_error_stat": false, 00:25:21.272 "nvme_ioq_poll_period_us": 0, 00:25:21.272 "rdma_cm_event_timeout_ms": 0, 00:25:21.272 "rdma_max_cq_size": 0, 00:25:21.272 "rdma_srq_size": 0, 00:25:21.272 "reconnect_delay_sec": 0, 00:25:21.272 "timeout_admin_us": 0, 00:25:21.272 "timeout_us": 0, 00:25:21.272 "transport_ack_timeout": 0, 00:25:21.272 "transport_retry_count": 4, 00:25:21.272 "transport_tos": 0 00:25:21.272 } 00:25:21.272 }, 00:25:21.272 { 00:25:21.272 "method": "bdev_nvme_attach_controller", 00:25:21.272 "params": { 00:25:21.272 "adrfam": "IPv4", 00:25:21.272 "ctrlr_loss_timeout_sec": 0, 00:25:21.272 "ddgst": false, 00:25:21.272 "fast_io_fail_timeout_sec": 0, 00:25:21.272 "hdgst": false, 00:25:21.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:21.272 "name": "nvme0", 00:25:21.272 "prchk_guard": false, 00:25:21.272 "prchk_reftag": false, 00:25:21.272 "psk": "key0", 00:25:21.272 "reconnect_delay_sec": 0, 00:25:21.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.272 "traddr": "10.0.0.2", 00:25:21.272 "trsvcid": "4420", 00:25:21.272 "trtype": "TCP" 00:25:21.272 } 00:25:21.272 }, 00:25:21.272 { 00:25:21.272 "method": "bdev_nvme_set_hotplug", 00:25:21.272 "params": { 00:25:21.272 "enable": false, 00:25:21.272 "period_us": 100000 00:25:21.272 } 00:25:21.272 }, 00:25:21.272 { 00:25:21.272 "method": "bdev_enable_histogram", 00:25:21.272 "params": { 00:25:21.272 "enable": true, 00:25:21.272 "name": "nvme0n1" 00:25:21.272 } 00:25:21.272 }, 00:25:21.272 { 00:25:21.272 "method": "bdev_wait_for_examine" 00:25:21.272 } 00:25:21.272 ] 00:25:21.272 }, 00:25:21.272 { 00:25:21.272 "subsystem": "nbd", 00:25:21.272 "config": [] 00:25:21.272 } 00:25:21.272 ] 00:25:21.272 }' 00:25:21.530 [2024-04-18 11:16:29.546251] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:25:21.530 [2024-04-18 11:16:29.546440] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80815 ] 00:25:21.530 [2024-04-18 11:16:29.722334] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.787 [2024-04-18 11:16:30.003506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.352 [2024-04-18 11:16:30.395177] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:22.352 11:16:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:22.352 11:16:30 -- common/autotest_common.sh@850 -- # return 0 00:25:22.352 11:16:30 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.352 11:16:30 -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:22.611 11:16:30 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.869 11:16:30 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:22.869 Running I/O for 1 seconds... 00:25:23.803 00:25:23.803 Latency(us) 00:25:23.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.803 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:23.803 Verification LBA range: start 0x0 length 0x2000 00:25:23.803 nvme0n1 : 1.02 2784.02 10.88 0.00 0.00 45384.79 8936.73 38606.66 00:25:23.803 =================================================================================================================== 00:25:23.803 Total : 2784.02 10.88 0.00 0.00 45384.79 8936.73 38606.66 00:25:23.803 0 00:25:23.803 11:16:32 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:23.803 11:16:32 -- target/tls.sh@279 -- # cleanup 00:25:23.803 11:16:32 -- target/tls.sh@15 -- # process_shm --id 0 00:25:23.803 11:16:32 -- common/autotest_common.sh@794 -- # type=--id 00:25:23.803 11:16:32 -- common/autotest_common.sh@795 -- # id=0 00:25:23.803 11:16:32 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:25:23.803 11:16:32 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:23.803 11:16:32 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:25:23.803 11:16:32 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:25:23.803 11:16:32 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:25:23.803 11:16:32 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:23.803 nvmf_trace.0 00:25:24.061 11:16:32 -- common/autotest_common.sh@809 -- # return 0 00:25:24.061 11:16:32 -- target/tls.sh@16 -- # killprocess 80815 00:25:24.061 11:16:32 -- common/autotest_common.sh@936 -- # '[' -z 80815 ']' 00:25:24.061 11:16:32 -- common/autotest_common.sh@940 -- # kill -0 80815 00:25:24.061 11:16:32 -- common/autotest_common.sh@941 -- # uname 00:25:24.061 11:16:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:24.061 11:16:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80815 00:25:24.061 11:16:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:24.061 11:16:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:24.061 killing process with pid 80815 00:25:24.061 11:16:32 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 80815' 00:25:24.061 Received shutdown signal, test time was about 1.000000 seconds 00:25:24.061 00:25:24.061 Latency(us) 00:25:24.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.061 =================================================================================================================== 00:25:24.061 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.061 11:16:32 -- common/autotest_common.sh@955 -- # kill 80815 00:25:24.061 11:16:32 -- common/autotest_common.sh@960 -- # wait 80815 00:25:25.439 11:16:33 -- target/tls.sh@17 -- # nvmftestfini 00:25:25.439 11:16:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:25.439 11:16:33 -- nvmf/common.sh@117 -- # sync 00:25:25.439 11:16:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:25.439 11:16:33 -- nvmf/common.sh@120 -- # set +e 00:25:25.439 11:16:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:25.439 11:16:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:25.439 rmmod nvme_tcp 00:25:25.439 rmmod nvme_fabrics 00:25:25.439 rmmod nvme_keyring 00:25:25.439 11:16:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:25.439 11:16:33 -- nvmf/common.sh@124 -- # set -e 00:25:25.439 11:16:33 -- nvmf/common.sh@125 -- # return 0 00:25:25.439 11:16:33 -- nvmf/common.sh@478 -- # '[' -n 80768 ']' 00:25:25.439 11:16:33 -- nvmf/common.sh@479 -- # killprocess 80768 00:25:25.439 11:16:33 -- common/autotest_common.sh@936 -- # '[' -z 80768 ']' 00:25:25.439 11:16:33 -- common/autotest_common.sh@940 -- # kill -0 80768 00:25:25.439 11:16:33 -- common/autotest_common.sh@941 -- # uname 00:25:25.439 11:16:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:25.439 11:16:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80768 00:25:25.439 11:16:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:25.439 11:16:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:25.439 killing process with pid 80768 00:25:25.439 11:16:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80768' 00:25:25.439 11:16:33 -- common/autotest_common.sh@955 -- # kill 80768 00:25:25.439 11:16:33 -- common/autotest_common.sh@960 -- # wait 80768 00:25:26.832 11:16:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:26.832 11:16:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:26.832 11:16:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:26.832 11:16:34 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.832 11:16:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:26.832 11:16:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.832 11:16:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.832 11:16:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.832 11:16:34 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:26.832 11:16:34 -- target/tls.sh@18 -- # rm -f /tmp/tmp.itFYbR4acc /tmp/tmp.7JNyQs2MoJ /tmp/tmp.mufboOTJKP 00:25:26.832 00:25:26.832 real 1m48.425s 00:25:26.832 user 2m53.581s 00:25:26.832 sys 0m28.555s 00:25:26.832 11:16:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:26.832 11:16:34 -- common/autotest_common.sh@10 -- # set +x 00:25:26.832 ************************************ 00:25:26.832 END TEST nvmf_tls 00:25:26.832 ************************************ 00:25:26.832 11:16:34 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:26.832 11:16:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:26.832 11:16:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:26.832 11:16:34 -- common/autotest_common.sh@10 -- # set +x 00:25:26.832 ************************************ 00:25:26.832 START TEST nvmf_fips 00:25:26.832 ************************************ 00:25:26.832 11:16:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:26.832 * Looking for test storage... 00:25:26.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:25:26.832 11:16:34 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:26.832 11:16:34 -- nvmf/common.sh@7 -- # uname -s 00:25:26.832 11:16:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.832 11:16:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.832 11:16:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.832 11:16:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.832 11:16:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.832 11:16:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.832 11:16:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.832 11:16:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.832 11:16:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.832 11:16:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.832 11:16:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:26.832 11:16:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:26.832 11:16:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.832 11:16:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.832 11:16:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:26.832 11:16:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.832 11:16:34 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:26.832 11:16:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.832 11:16:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.832 11:16:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.832 11:16:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.832 11:16:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.832 11:16:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.832 11:16:34 -- paths/export.sh@5 -- # export PATH 00:25:26.832 11:16:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.832 11:16:34 -- nvmf/common.sh@47 -- # : 0 00:25:26.832 11:16:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:26.832 11:16:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:26.832 11:16:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.832 11:16:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.832 11:16:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.832 11:16:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:26.832 11:16:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:26.832 11:16:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:26.832 11:16:34 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:26.832 11:16:34 -- fips/fips.sh@89 -- # check_openssl_version 00:25:26.832 11:16:34 -- fips/fips.sh@83 -- # local target=3.0.0 00:25:26.832 11:16:34 -- fips/fips.sh@85 -- # openssl version 00:25:26.832 11:16:34 -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:26.832 11:16:34 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:26.832 11:16:34 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:26.832 11:16:34 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:26.832 11:16:34 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:26.832 11:16:34 -- scripts/common.sh@333 -- # IFS=.-: 00:25:26.832 11:16:34 -- scripts/common.sh@333 -- # read -ra ver1 00:25:26.832 11:16:34 -- scripts/common.sh@334 -- # IFS=.-: 00:25:26.832 11:16:34 -- scripts/common.sh@334 -- # read -ra ver2 00:25:26.832 11:16:34 -- scripts/common.sh@335 -- # local 'op=>=' 00:25:26.832 11:16:34 -- scripts/common.sh@337 -- # ver1_l=3 00:25:26.832 11:16:34 -- scripts/common.sh@338 -- # ver2_l=3 00:25:26.832 11:16:34 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:25:26.832 11:16:34 -- 
scripts/common.sh@341 -- # case "$op" in 00:25:26.832 11:16:34 -- scripts/common.sh@345 -- # : 1 00:25:26.832 11:16:34 -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:26.832 11:16:34 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:26.832 11:16:34 -- scripts/common.sh@362 -- # decimal 3 00:25:26.832 11:16:34 -- scripts/common.sh@350 -- # local d=3 00:25:26.832 11:16:34 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:26.832 11:16:34 -- scripts/common.sh@352 -- # echo 3 00:25:26.832 11:16:34 -- scripts/common.sh@362 -- # ver1[v]=3 00:25:26.832 11:16:34 -- scripts/common.sh@363 -- # decimal 3 00:25:26.832 11:16:34 -- scripts/common.sh@350 -- # local d=3 00:25:26.832 11:16:34 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:26.832 11:16:34 -- scripts/common.sh@352 -- # echo 3 00:25:26.832 11:16:34 -- scripts/common.sh@363 -- # ver2[v]=3 00:25:26.832 11:16:34 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:26.832 11:16:34 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:26.832 11:16:34 -- scripts/common.sh@361 -- # (( v++ )) 00:25:26.832 11:16:34 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:26.832 11:16:34 -- scripts/common.sh@362 -- # decimal 0 00:25:26.832 11:16:34 -- scripts/common.sh@350 -- # local d=0 00:25:26.832 11:16:34 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:26.832 11:16:34 -- scripts/common.sh@352 -- # echo 0 00:25:26.832 11:16:34 -- scripts/common.sh@362 -- # ver1[v]=0 00:25:26.832 11:16:34 -- scripts/common.sh@363 -- # decimal 0 00:25:26.832 11:16:34 -- scripts/common.sh@350 -- # local d=0 00:25:26.832 11:16:34 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:26.832 11:16:34 -- scripts/common.sh@352 -- # echo 0 00:25:26.832 11:16:34 -- scripts/common.sh@363 -- # ver2[v]=0 00:25:26.832 11:16:34 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:26.832 11:16:34 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:26.832 11:16:34 -- scripts/common.sh@361 -- # (( v++ )) 00:25:26.832 11:16:34 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:26.832 11:16:34 -- scripts/common.sh@362 -- # decimal 9 00:25:26.832 11:16:34 -- scripts/common.sh@350 -- # local d=9 00:25:26.832 11:16:34 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:26.832 11:16:34 -- scripts/common.sh@352 -- # echo 9 00:25:26.832 11:16:34 -- scripts/common.sh@362 -- # ver1[v]=9 00:25:26.832 11:16:34 -- scripts/common.sh@363 -- # decimal 0 00:25:26.832 11:16:34 -- scripts/common.sh@350 -- # local d=0 00:25:26.832 11:16:34 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:26.832 11:16:34 -- scripts/common.sh@352 -- # echo 0 00:25:26.832 11:16:34 -- scripts/common.sh@363 -- # ver2[v]=0 00:25:26.832 11:16:34 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:26.832 11:16:34 -- scripts/common.sh@364 -- # return 0 00:25:26.832 11:16:34 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:26.833 11:16:34 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:26.833 11:16:34 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:26.833 11:16:35 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:26.833 11:16:35 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:26.833 11:16:35 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:26.833 11:16:35 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:26.833 11:16:35 -- fips/fips.sh@113 -- # build_openssl_config 00:25:26.833 11:16:35 -- fips/fips.sh@37 -- # cat 00:25:26.833 11:16:35 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:25:26.833 11:16:35 -- fips/fips.sh@58 -- # cat - 00:25:26.833 11:16:35 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:26.833 11:16:35 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:26.833 11:16:35 -- fips/fips.sh@116 -- # mapfile -t providers 00:25:26.833 11:16:35 -- fips/fips.sh@116 -- # openssl list -providers 00:25:26.833 11:16:35 -- fips/fips.sh@116 -- # grep name 00:25:27.091 11:16:35 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:27.091 11:16:35 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:27.091 11:16:35 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:27.091 11:16:35 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:27.091 11:16:35 -- fips/fips.sh@127 -- # : 00:25:27.091 11:16:35 -- common/autotest_common.sh@638 -- # local es=0 00:25:27.091 11:16:35 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:27.091 11:16:35 -- common/autotest_common.sh@626 -- # local arg=openssl 00:25:27.091 11:16:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:27.091 11:16:35 -- common/autotest_common.sh@630 -- # type -t openssl 00:25:27.091 11:16:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:27.091 11:16:35 -- common/autotest_common.sh@632 -- # type -P openssl 00:25:27.091 11:16:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:27.091 11:16:35 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:25:27.091 11:16:35 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:25:27.091 11:16:35 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:25:27.091 Error setting digest 00:25:27.091 0052CA3D4B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:27.091 0052CA3D4B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:27.091 11:16:35 -- common/autotest_common.sh@641 -- # es=1 00:25:27.091 11:16:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:27.091 11:16:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:27.091 11:16:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:27.091 11:16:35 -- fips/fips.sh@130 -- # nvmftestinit 00:25:27.091 11:16:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:27.091 11:16:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.091 11:16:35 -- nvmf/common.sh@437 -- # prepare_net_devs 
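Note on the checks above: fips.sh first requires OpenSSL 3.0.0 or newer, then confirms that a FIPS provider is loaded and that a non-approved digest is refused. A condensed sketch of the same checks is below; sort -V stands in for the script's own field-by-field version comparison, and the MD5 probe is expected to fail in a FIPS-enforcing environment exactly as the "Error setting digest" output above shows.

    ver=$(openssl version | awk '{print $2}')
    [[ $(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1) == 3.0.0 ]] || exit 1
    openssl list -providers | grep -i name      # a fips provider must be listed
    if echo hello | openssl md5 >/dev/null 2>&1; then
        echo "MD5 was accepted, so OpenSSL is not running in FIPS mode" >&2
        exit 1
    fi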
00:25:27.091 11:16:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:27.091 11:16:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:27.091 11:16:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.091 11:16:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.091 11:16:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.091 11:16:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:27.091 11:16:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:27.091 11:16:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:27.091 11:16:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:27.091 11:16:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:27.091 11:16:35 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:27.091 11:16:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.091 11:16:35 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.091 11:16:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:27.091 11:16:35 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:27.091 11:16:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:27.091 11:16:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:27.091 11:16:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:27.091 11:16:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.091 11:16:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:27.091 11:16:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:27.091 11:16:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:27.091 11:16:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:27.091 11:16:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:27.091 11:16:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:27.091 Cannot find device "nvmf_tgt_br" 00:25:27.091 11:16:35 -- nvmf/common.sh@155 -- # true 00:25:27.091 11:16:35 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:27.091 Cannot find device "nvmf_tgt_br2" 00:25:27.091 11:16:35 -- nvmf/common.sh@156 -- # true 00:25:27.091 11:16:35 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:27.091 11:16:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:27.091 Cannot find device "nvmf_tgt_br" 00:25:27.091 11:16:35 -- nvmf/common.sh@158 -- # true 00:25:27.091 11:16:35 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:27.091 Cannot find device "nvmf_tgt_br2" 00:25:27.091 11:16:35 -- nvmf/common.sh@159 -- # true 00:25:27.091 11:16:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:27.091 11:16:35 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:27.091 11:16:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:27.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:27.091 11:16:35 -- nvmf/common.sh@162 -- # true 00:25:27.091 11:16:35 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:27.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:27.091 11:16:35 -- nvmf/common.sh@163 -- # true 00:25:27.091 11:16:35 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:27.091 11:16:35 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:27.091 11:16:35 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:27.091 11:16:35 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:27.091 11:16:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:27.350 11:16:35 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:27.350 11:16:35 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:27.350 11:16:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:27.350 11:16:35 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:27.350 11:16:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:27.350 11:16:35 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:27.350 11:16:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:27.350 11:16:35 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:27.350 11:16:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:27.350 11:16:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:27.350 11:16:35 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:27.350 11:16:35 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:27.350 11:16:35 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:27.350 11:16:35 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:27.350 11:16:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:27.350 11:16:35 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:27.350 11:16:35 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:27.350 11:16:35 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:27.350 11:16:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:27.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:25:27.350 00:25:27.350 --- 10.0.0.2 ping statistics --- 00:25:27.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.350 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:25:27.350 11:16:35 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:27.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:27.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:25:27.350 00:25:27.350 --- 10.0.0.3 ping statistics --- 00:25:27.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.350 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:27.350 11:16:35 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:27.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:25:27.350 00:25:27.350 --- 10.0.0.1 ping statistics --- 00:25:27.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.350 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:25:27.350 11:16:35 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.350 11:16:35 -- nvmf/common.sh@422 -- # return 0 00:25:27.350 11:16:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:27.350 11:16:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.350 11:16:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:27.350 11:16:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:27.350 11:16:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.350 11:16:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:27.350 11:16:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:27.350 11:16:35 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:27.350 11:16:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:27.350 11:16:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:27.350 11:16:35 -- common/autotest_common.sh@10 -- # set +x 00:25:27.350 11:16:35 -- nvmf/common.sh@470 -- # nvmfpid=81127 00:25:27.350 11:16:35 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:27.350 11:16:35 -- nvmf/common.sh@471 -- # waitforlisten 81127 00:25:27.350 11:16:35 -- common/autotest_common.sh@817 -- # '[' -z 81127 ']' 00:25:27.350 11:16:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.350 11:16:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:27.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.350 11:16:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.350 11:16:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:27.350 11:16:35 -- common/autotest_common.sh@10 -- # set +x 00:25:27.609 [2024-04-18 11:16:35.646787] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:27.609 [2024-04-18 11:16:35.646950] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.609 [2024-04-18 11:16:35.821273] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.867 [2024-04-18 11:16:36.063980] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.867 [2024-04-18 11:16:36.064064] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.868 [2024-04-18 11:16:36.064105] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.868 [2024-04-18 11:16:36.064148] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.868 [2024-04-18 11:16:36.064165] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
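Note on the network setup above: nvmf_veth_init builds a small veth plus bridge topology, with the initiator left in the root namespace on 10.0.0.1 and the target placed in the nvmf_tgt_ns_spdk namespace on 10.0.0.2. A trimmed sketch of the same steps follows; the run above also adds a second target interface on 10.0.0.3 and iptables ACCEPT rules, omitted here.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.2    # initiator to target, as in the trace above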
00:25:27.868 [2024-04-18 11:16:36.064203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.434 11:16:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:28.434 11:16:36 -- common/autotest_common.sh@850 -- # return 0 00:25:28.434 11:16:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:28.434 11:16:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:28.434 11:16:36 -- common/autotest_common.sh@10 -- # set +x 00:25:28.434 11:16:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.434 11:16:36 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:28.434 11:16:36 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:28.434 11:16:36 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:28.434 11:16:36 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:28.434 11:16:36 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:28.434 11:16:36 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:28.434 11:16:36 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:28.434 11:16:36 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:28.692 [2024-04-18 11:16:36.861173] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.692 [2024-04-18 11:16:36.877083] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:28.692 [2024-04-18 11:16:36.877369] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.950 [2024-04-18 11:16:36.933623] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:28.950 malloc0 00:25:28.950 11:16:36 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.950 11:16:36 -- fips/fips.sh@147 -- # bdevperf_pid=81179 00:25:28.950 11:16:36 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:28.950 11:16:36 -- fips/fips.sh@148 -- # waitforlisten 81179 /var/tmp/bdevperf.sock 00:25:28.950 11:16:36 -- common/autotest_common.sh@817 -- # '[' -z 81179 ']' 00:25:28.950 11:16:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:28.950 11:16:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:28.950 11:16:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:28.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:28.950 11:16:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:28.950 11:16:36 -- common/autotest_common.sh@10 -- # set +x 00:25:28.950 [2024-04-18 11:16:37.107564] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
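Note on the key handling above: the test writes the NVMe TLS interchange-format PSK (a publicly known test key, not a secret) to a file with 0600 permissions, and the same file is later passed to bdevperf with --psk in the attach step the trace continues with below. A sketch of those two steps, with /tmp/key.txt standing in for the repo's test/nvmf/fips/key.txt:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > /tmp/key.txt
    chmod 0600 /tmp/key.txt
    # Attach step, as traced further below: point the idle bdevperf instance at
    # the TLS-enabled listener on 10.0.0.2:4420 using the PSK file, then run
    # the verify workload over the same RPC socket.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests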
00:25:28.950 [2024-04-18 11:16:37.107716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81179 ] 00:25:29.208 [2024-04-18 11:16:37.281258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.466 [2024-04-18 11:16:37.587312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.032 11:16:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:30.032 11:16:38 -- common/autotest_common.sh@850 -- # return 0 00:25:30.032 11:16:38 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:30.032 [2024-04-18 11:16:38.245732] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:30.032 [2024-04-18 11:16:38.245934] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:30.291 TLSTESTn1 00:25:30.291 11:16:38 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:30.291 Running I/O for 10 seconds... 00:25:42.493 00:25:42.494 Latency(us) 00:25:42.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.494 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:42.494 Verification LBA range: start 0x0 length 0x2000 00:25:42.494 TLSTESTn1 : 10.03 2631.28 10.28 0.00 0.00 48552.82 8340.95 44087.85 00:25:42.494 =================================================================================================================== 00:25:42.494 Total : 2631.28 10.28 0.00 0.00 48552.82 8340.95 44087.85 00:25:42.494 0 00:25:42.494 11:16:48 -- fips/fips.sh@1 -- # cleanup 00:25:42.494 11:16:48 -- fips/fips.sh@15 -- # process_shm --id 0 00:25:42.494 11:16:48 -- common/autotest_common.sh@794 -- # type=--id 00:25:42.494 11:16:48 -- common/autotest_common.sh@795 -- # id=0 00:25:42.494 11:16:48 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:25:42.494 11:16:48 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:42.494 11:16:48 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:25:42.494 11:16:48 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:25:42.494 11:16:48 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:25:42.494 11:16:48 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:42.494 nvmf_trace.0 00:25:42.494 11:16:48 -- common/autotest_common.sh@809 -- # return 0 00:25:42.494 11:16:48 -- fips/fips.sh@16 -- # killprocess 81179 00:25:42.494 11:16:48 -- common/autotest_common.sh@936 -- # '[' -z 81179 ']' 00:25:42.494 11:16:48 -- common/autotest_common.sh@940 -- # kill -0 81179 00:25:42.494 11:16:48 -- common/autotest_common.sh@941 -- # uname 00:25:42.494 11:16:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:42.494 11:16:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81179 00:25:42.494 11:16:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:42.494 
11:16:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:42.494 killing process with pid 81179 00:25:42.494 11:16:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81179' 00:25:42.494 11:16:48 -- common/autotest_common.sh@955 -- # kill 81179 00:25:42.494 11:16:48 -- common/autotest_common.sh@960 -- # wait 81179 00:25:42.494 Received shutdown signal, test time was about 10.000000 seconds 00:25:42.494 00:25:42.494 Latency(us) 00:25:42.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.494 =================================================================================================================== 00:25:42.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.494 [2024-04-18 11:16:48.663798] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:42.494 11:16:49 -- fips/fips.sh@17 -- # nvmftestfini 00:25:42.494 11:16:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:42.494 11:16:49 -- nvmf/common.sh@117 -- # sync 00:25:42.494 11:16:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.494 11:16:49 -- nvmf/common.sh@120 -- # set +e 00:25:42.494 11:16:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.494 11:16:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.494 rmmod nvme_tcp 00:25:42.494 rmmod nvme_fabrics 00:25:42.494 rmmod nvme_keyring 00:25:42.494 11:16:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.494 11:16:50 -- nvmf/common.sh@124 -- # set -e 00:25:42.494 11:16:50 -- nvmf/common.sh@125 -- # return 0 00:25:42.494 11:16:50 -- nvmf/common.sh@478 -- # '[' -n 81127 ']' 00:25:42.494 11:16:50 -- nvmf/common.sh@479 -- # killprocess 81127 00:25:42.494 11:16:50 -- common/autotest_common.sh@936 -- # '[' -z 81127 ']' 00:25:42.494 11:16:50 -- common/autotest_common.sh@940 -- # kill -0 81127 00:25:42.494 11:16:50 -- common/autotest_common.sh@941 -- # uname 00:25:42.494 11:16:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:42.494 11:16:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81127 00:25:42.494 11:16:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:42.494 killing process with pid 81127 00:25:42.494 11:16:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:42.494 11:16:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81127' 00:25:42.494 11:16:50 -- common/autotest_common.sh@955 -- # kill 81127 00:25:42.494 [2024-04-18 11:16:50.064535] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:42.494 11:16:50 -- common/autotest_common.sh@960 -- # wait 81127 00:25:43.430 11:16:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:43.430 11:16:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:43.430 11:16:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:43.430 11:16:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.430 11:16:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:43.430 11:16:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.430 11:16:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.430 11:16:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.430 11:16:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:43.430 11:16:51 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:43.430 00:25:43.430 real 0m16.573s 00:25:43.430 user 0m23.647s 00:25:43.430 sys 0m5.431s 00:25:43.430 11:16:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:43.430 11:16:51 -- common/autotest_common.sh@10 -- # set +x 00:25:43.430 ************************************ 00:25:43.430 END TEST nvmf_fips 00:25:43.430 ************************************ 00:25:43.430 11:16:51 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:25:43.430 11:16:51 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:25:43.430 11:16:51 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:25:43.430 11:16:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:43.430 11:16:51 -- common/autotest_common.sh@10 -- # set +x 00:25:43.430 11:16:51 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:25:43.430 11:16:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:43.430 11:16:51 -- common/autotest_common.sh@10 -- # set +x 00:25:43.430 11:16:51 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:25:43.430 11:16:51 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:43.430 11:16:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:43.430 11:16:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:43.430 11:16:51 -- common/autotest_common.sh@10 -- # set +x 00:25:43.430 ************************************ 00:25:43.430 START TEST nvmf_multicontroller 00:25:43.430 ************************************ 00:25:43.430 11:16:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:43.430 * Looking for test storage... 00:25:43.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:43.430 11:16:51 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:43.430 11:16:51 -- nvmf/common.sh@7 -- # uname -s 00:25:43.430 11:16:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.430 11:16:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.430 11:16:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.430 11:16:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.430 11:16:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.430 11:16:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.430 11:16:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.430 11:16:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.430 11:16:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.430 11:16:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.689 11:16:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:43.689 11:16:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:43.689 11:16:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.689 11:16:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.689 11:16:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:43.689 11:16:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.689 11:16:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:43.689 11:16:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.689 11:16:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.689 11:16:51 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.689 11:16:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.689 11:16:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.689 11:16:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.689 11:16:51 -- paths/export.sh@5 -- # export PATH 00:25:43.689 11:16:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.689 11:16:51 -- nvmf/common.sh@47 -- # : 0 00:25:43.689 11:16:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:43.689 11:16:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:43.689 11:16:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.689 11:16:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.689 11:16:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.689 11:16:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:43.689 11:16:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:43.689 11:16:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:43.689 11:16:51 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:43.689 11:16:51 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:43.689 11:16:51 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:43.689 11:16:51 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:43.689 11:16:51 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:43.689 11:16:51 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 
00:25:43.689 11:16:51 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:43.689 11:16:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:43.689 11:16:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.689 11:16:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:43.689 11:16:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:43.689 11:16:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:43.689 11:16:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.689 11:16:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.689 11:16:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.689 11:16:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:43.689 11:16:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:43.689 11:16:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:43.689 11:16:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:43.689 11:16:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:43.689 11:16:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:43.689 11:16:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.690 11:16:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.690 11:16:51 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:43.690 11:16:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:43.690 11:16:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:43.690 11:16:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:43.690 11:16:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:43.690 11:16:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.690 11:16:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:43.690 11:16:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:43.690 11:16:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:43.690 11:16:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:43.690 11:16:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:43.690 11:16:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:43.690 Cannot find device "nvmf_tgt_br" 00:25:43.690 11:16:51 -- nvmf/common.sh@155 -- # true 00:25:43.690 11:16:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:43.690 Cannot find device "nvmf_tgt_br2" 00:25:43.690 11:16:51 -- nvmf/common.sh@156 -- # true 00:25:43.690 11:16:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:43.690 11:16:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:43.690 Cannot find device "nvmf_tgt_br" 00:25:43.690 11:16:51 -- nvmf/common.sh@158 -- # true 00:25:43.690 11:16:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:43.690 Cannot find device "nvmf_tgt_br2" 00:25:43.690 11:16:51 -- nvmf/common.sh@159 -- # true 00:25:43.690 11:16:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:43.690 11:16:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:43.690 11:16:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:43.690 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.690 11:16:51 -- nvmf/common.sh@162 -- # true 00:25:43.690 11:16:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:43.690 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:25:43.690 11:16:51 -- nvmf/common.sh@163 -- # true 00:25:43.690 11:16:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:43.690 11:16:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:43.690 11:16:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:43.690 11:16:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:43.690 11:16:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:43.690 11:16:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:43.690 11:16:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:43.690 11:16:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:43.690 11:16:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:43.690 11:16:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:43.690 11:16:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:43.949 11:16:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:43.949 11:16:51 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:43.949 11:16:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:43.949 11:16:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:43.949 11:16:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:43.949 11:16:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:43.949 11:16:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:43.949 11:16:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:43.949 11:16:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:43.949 11:16:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:43.949 11:16:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:43.949 11:16:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:43.949 11:16:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:43.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:25:43.949 00:25:43.949 --- 10.0.0.2 ping statistics --- 00:25:43.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.949 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:25:43.949 11:16:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:43.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:43.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:25:43.949 00:25:43.949 --- 10.0.0.3 ping statistics --- 00:25:43.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.949 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:43.949 11:16:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:43.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:43.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:25:43.949 00:25:43.949 --- 10.0.0.1 ping statistics --- 00:25:43.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.949 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:43.949 11:16:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.949 11:16:52 -- nvmf/common.sh@422 -- # return 0 00:25:43.949 11:16:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:43.949 11:16:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.949 11:16:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:43.949 11:16:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:43.949 11:16:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.949 11:16:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:43.949 11:16:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:43.949 11:16:52 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:43.949 11:16:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:43.949 11:16:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:43.949 11:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:43.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.949 11:16:52 -- nvmf/common.sh@470 -- # nvmfpid=81577 00:25:43.949 11:16:52 -- nvmf/common.sh@471 -- # waitforlisten 81577 00:25:43.949 11:16:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:43.949 11:16:52 -- common/autotest_common.sh@817 -- # '[' -z 81577 ']' 00:25:43.949 11:16:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.949 11:16:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:43.949 11:16:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.949 11:16:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:43.949 11:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:44.207 [2024-04-18 11:16:52.234593] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:44.207 [2024-04-18 11:16:52.234771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.207 [2024-04-18 11:16:52.410961] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:44.774 [2024-04-18 11:16:52.703540] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.774 [2024-04-18 11:16:52.703966] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.774 [2024-04-18 11:16:52.704186] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.774 [2024-04-18 11:16:52.704404] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.774 [2024-04-18 11:16:52.704466] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
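For reference, the nvmf_veth_init sequence traced above reduces to a handful of iproute2/iptables commands. A minimal sketch using the names and addresses from the log (the real helper in /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh also creates a second target interface, nvmf_tgt_if2 at 10.0.0.3, omitted here):

# namespace for the target, plus veth pairs for the initiator and target sides
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# addressing: initiator at 10.0.0.1, target at 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# bring the links up and bridge the two *_br peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# let NVMe/TCP (port 4420) in and allow hairpin forwarding on the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity-check both directions before starting nvmf_tgt in the namespace
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1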
00:25:44.774 [2024-04-18 11:16:52.704862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.774 [2024-04-18 11:16:52.704996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.774 [2024-04-18 11:16:52.705009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:45.032 11:16:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:45.032 11:16:53 -- common/autotest_common.sh@850 -- # return 0 00:25:45.032 11:16:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:45.032 11:16:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:45.032 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.290 11:16:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.290 11:16:53 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.290 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.290 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.290 [2024-04-18 11:16:53.293001] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.290 11:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.290 11:16:53 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:45.290 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.290 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.290 Malloc0 00:25:45.290 11:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.290 11:16:53 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.290 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.290 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.290 11:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.290 11:16:53 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.290 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.290 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.291 11:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.291 11:16:53 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.291 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.291 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.291 [2024-04-18 11:16:53.412563] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.291 11:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.291 11:16:53 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:45.291 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.291 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.291 [2024-04-18 11:16:53.420485] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:45.291 11:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.291 11:16:53 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:45.291 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.291 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.291 Malloc1 00:25:45.291 11:16:53 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.291 11:16:53 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:45.291 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.291 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.605 11:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.605 11:16:53 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:45.605 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.605 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.605 11:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.605 11:16:53 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:45.605 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.605 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.605 11:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.605 11:16:53 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:45.605 11:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.605 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:45.605 11:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.605 11:16:53 -- host/multicontroller.sh@44 -- # bdevperf_pid=81635 00:25:45.605 11:16:53 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:45.605 11:16:53 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:45.605 11:16:53 -- host/multicontroller.sh@47 -- # waitforlisten 81635 /var/tmp/bdevperf.sock 00:25:45.605 11:16:53 -- common/autotest_common.sh@817 -- # '[' -z 81635 ']' 00:25:45.605 11:16:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:45.605 11:16:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:45.605 11:16:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:45.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
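At this point the target exposes two subsystems (cnode1 and cnode2), each with listeners on 10.0.0.2 ports 4420 and 4421, and bdevperf has been launched with its own RPC socket. The bdev_nvme_attach_controller calls that follow probe SPDK's multipath rules; a condensed sketch of that sequence, assuming rpc_cmd in the trace is a thin wrapper around scripts/rpc.py (commands and outcomes are taken from the JSON-RPC responses below):

RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'

# first path to cnode1: succeeds and creates bdev NVMe0n1
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# all of the following are rejected with -114 (controller NVMe0 already exists):
#   - same name but a different host NQN (-q nqn.2021-09-7.io.spdk:00001)
#   - same name but a different subsystem (cnode2)
#   - a second path with -x disable (multipath explicitly disabled)
#   - the same 4420 path again, even with -x failover

# a genuinely new path (port 4421) to the same subsystem is accepted
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1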
00:25:45.605 11:16:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:45.605 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:46.565 11:16:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:46.565 11:16:54 -- common/autotest_common.sh@850 -- # return 0 00:25:46.565 11:16:54 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:46.565 11:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.565 11:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:46.565 NVMe0n1 00:25:46.565 11:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.565 11:16:54 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:46.565 11:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.565 11:16:54 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:46.565 11:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:46.565 11:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.565 1 00:25:46.565 11:16:54 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:46.565 11:16:54 -- common/autotest_common.sh@638 -- # local es=0 00:25:46.565 11:16:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:46.565 11:16:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:46.565 11:16:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.565 11:16:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:46.565 11:16:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.565 11:16:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:46.565 11:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.565 11:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:46.824 2024/04/18 11:16:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:25:46.824 request: 00:25:46.824 { 00:25:46.824 "method": "bdev_nvme_attach_controller", 00:25:46.824 "params": { 00:25:46.824 "name": "NVMe0", 00:25:46.824 "trtype": "tcp", 00:25:46.824 "traddr": "10.0.0.2", 00:25:46.824 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:46.824 "hostaddr": "10.0.0.2", 00:25:46.824 "hostsvcid": "60000", 00:25:46.824 "adrfam": "ipv4", 00:25:46.824 "trsvcid": "4420", 00:25:46.824 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:25:46.824 } 00:25:46.824 } 00:25:46.824 Got JSON-RPC error response 00:25:46.824 GoRPCClient: error on JSON-RPC call 00:25:46.824 11:16:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:46.824 11:16:54 -- 
common/autotest_common.sh@641 -- # es=1 00:25:46.824 11:16:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:46.824 11:16:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:46.824 11:16:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:46.824 11:16:54 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:46.824 11:16:54 -- common/autotest_common.sh@638 -- # local es=0 00:25:46.824 11:16:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:46.824 11:16:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:46.824 11:16:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.824 11:16:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:46.824 11:16:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.824 11:16:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:46.824 11:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.824 11:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:46.824 2024/04/18 11:16:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:25:46.824 request: 00:25:46.824 { 00:25:46.824 "method": "bdev_nvme_attach_controller", 00:25:46.824 "params": { 00:25:46.824 "name": "NVMe0", 00:25:46.824 "trtype": "tcp", 00:25:46.824 "traddr": "10.0.0.2", 00:25:46.824 "hostaddr": "10.0.0.2", 00:25:46.824 "hostsvcid": "60000", 00:25:46.824 "adrfam": "ipv4", 00:25:46.824 "trsvcid": "4420", 00:25:46.824 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:25:46.824 } 00:25:46.824 } 00:25:46.824 Got JSON-RPC error response 00:25:46.824 GoRPCClient: error on JSON-RPC call 00:25:46.824 11:16:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:46.824 11:16:54 -- common/autotest_common.sh@641 -- # es=1 00:25:46.824 11:16:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:46.824 11:16:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:46.824 11:16:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:46.824 11:16:54 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:46.824 11:16:54 -- common/autotest_common.sh@638 -- # local es=0 00:25:46.824 11:16:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:46.824 11:16:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:46.824 11:16:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.824 11:16:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:46.824 11:16:54 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.824 11:16:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:46.824 11:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.824 11:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:46.824 2024/04/18 11:16:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:25:46.824 request: 00:25:46.824 { 00:25:46.824 "method": "bdev_nvme_attach_controller", 00:25:46.824 "params": { 00:25:46.824 "name": "NVMe0", 00:25:46.824 "trtype": "tcp", 00:25:46.824 "traddr": "10.0.0.2", 00:25:46.824 "hostaddr": "10.0.0.2", 00:25:46.824 "hostsvcid": "60000", 00:25:46.824 "adrfam": "ipv4", 00:25:46.824 "trsvcid": "4420", 00:25:46.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.824 "multipath": "disable" 00:25:46.824 } 00:25:46.824 } 00:25:46.824 Got JSON-RPC error response 00:25:46.824 GoRPCClient: error on JSON-RPC call 00:25:46.824 11:16:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:46.824 11:16:54 -- common/autotest_common.sh@641 -- # es=1 00:25:46.824 11:16:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:46.824 11:16:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:46.825 11:16:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:46.825 11:16:54 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:46.825 11:16:54 -- common/autotest_common.sh@638 -- # local es=0 00:25:46.825 11:16:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:46.825 11:16:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:46.825 11:16:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.825 11:16:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:46.825 11:16:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:46.825 11:16:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:46.825 11:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.825 11:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:46.825 2024/04/18 11:16:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:25:46.825 request: 00:25:46.825 { 00:25:46.825 "method": "bdev_nvme_attach_controller", 00:25:46.825 "params": { 00:25:46.825 "name": "NVMe0", 
00:25:46.825 "trtype": "tcp", 00:25:46.825 "traddr": "10.0.0.2", 00:25:46.825 "hostaddr": "10.0.0.2", 00:25:46.825 "hostsvcid": "60000", 00:25:46.825 "adrfam": "ipv4", 00:25:46.825 "trsvcid": "4420", 00:25:46.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.825 "multipath": "failover" 00:25:46.825 } 00:25:46.825 } 00:25:46.825 Got JSON-RPC error response 00:25:46.825 GoRPCClient: error on JSON-RPC call 00:25:46.825 11:16:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:46.825 11:16:54 -- common/autotest_common.sh@641 -- # es=1 00:25:46.825 11:16:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:46.825 11:16:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:46.825 11:16:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:46.825 11:16:54 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:46.825 11:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.825 11:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:46.825 00:25:46.825 11:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.825 11:16:54 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:46.825 11:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.825 11:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:46.825 11:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.825 11:16:54 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:46.825 11:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.825 11:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:46.825 00:25:46.825 11:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.825 11:16:55 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:46.825 11:16:55 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:46.825 11:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.825 11:16:55 -- common/autotest_common.sh@10 -- # set +x 00:25:46.825 11:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.825 11:16:55 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:46.825 11:16:55 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:48.200 0 00:25:48.200 11:16:56 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:48.200 11:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.200 11:16:56 -- common/autotest_common.sh@10 -- # set +x 00:25:48.200 11:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.200 11:16:56 -- host/multicontroller.sh@100 -- # killprocess 81635 00:25:48.200 11:16:56 -- common/autotest_common.sh@936 -- # '[' -z 81635 ']' 00:25:48.200 11:16:56 -- common/autotest_common.sh@940 -- # kill -0 81635 00:25:48.200 11:16:56 -- common/autotest_common.sh@941 -- # uname 00:25:48.200 11:16:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:48.200 11:16:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81635 00:25:48.200 11:16:56 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:25:48.200 11:16:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:48.200 11:16:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81635' 00:25:48.200 killing process with pid 81635 00:25:48.200 11:16:56 -- common/autotest_common.sh@955 -- # kill 81635 00:25:48.200 11:16:56 -- common/autotest_common.sh@960 -- # wait 81635 00:25:49.577 11:16:57 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.577 11:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.577 11:16:57 -- common/autotest_common.sh@10 -- # set +x 00:25:49.577 11:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.577 11:16:57 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:49.577 11:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.577 11:16:57 -- common/autotest_common.sh@10 -- # set +x 00:25:49.577 11:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.577 11:16:57 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:49.577 11:16:57 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:49.577 11:16:57 -- common/autotest_common.sh@1598 -- # read -r file 00:25:49.577 11:16:57 -- common/autotest_common.sh@1597 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:25:49.577 11:16:57 -- common/autotest_common.sh@1597 -- # sort -u 00:25:49.577 11:16:57 -- common/autotest_common.sh@1599 -- # cat 00:25:49.577 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:25:49.577 [2024-04-18 11:16:53.646955] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:49.577 [2024-04-18 11:16:53.647172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81635 ] 00:25:49.577 [2024-04-18 11:16:53.821022] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.577 [2024-04-18 11:16:54.144387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.577 [2024-04-18 11:16:55.010799] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 6018078b-aa9e-4fcf-8386-1a0f9eda1e98 already exists 00:25:49.577 [2024-04-18 11:16:55.010895] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:6018078b-aa9e-4fcf-8386-1a0f9eda1e98 alias for bdev NVMe1n1 00:25:49.577 [2024-04-18 11:16:55.010943] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:49.577 Running I/O for 1 seconds... 
00:25:49.577 00:25:49.577 Latency(us) 00:25:49.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.577 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:49.577 NVMe0n1 : 1.00 13702.11 53.52 0.00 0.00 9325.39 3530.01 16562.73 00:25:49.577 =================================================================================================================== 00:25:49.577 Total : 13702.11 53.52 0.00 0.00 9325.39 3530.01 16562.73 00:25:49.577 Received shutdown signal, test time was about 1.000000 seconds 00:25:49.577 00:25:49.577 Latency(us) 00:25:49.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.577 =================================================================================================================== 00:25:49.577 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.577 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:25:49.577 11:16:57 -- common/autotest_common.sh@1604 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:49.577 11:16:57 -- common/autotest_common.sh@1598 -- # read -r file 00:25:49.577 11:16:57 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:49.577 11:16:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:49.577 11:16:57 -- nvmf/common.sh@117 -- # sync 00:25:49.577 11:16:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.577 11:16:57 -- nvmf/common.sh@120 -- # set +e 00:25:49.577 11:16:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.577 11:16:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.577 rmmod nvme_tcp 00:25:49.577 rmmod nvme_fabrics 00:25:49.577 rmmod nvme_keyring 00:25:49.577 11:16:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.577 11:16:57 -- nvmf/common.sh@124 -- # set -e 00:25:49.577 11:16:57 -- nvmf/common.sh@125 -- # return 0 00:25:49.577 11:16:57 -- nvmf/common.sh@478 -- # '[' -n 81577 ']' 00:25:49.577 11:16:57 -- nvmf/common.sh@479 -- # killprocess 81577 00:25:49.578 11:16:57 -- common/autotest_common.sh@936 -- # '[' -z 81577 ']' 00:25:49.578 11:16:57 -- common/autotest_common.sh@940 -- # kill -0 81577 00:25:49.578 11:16:57 -- common/autotest_common.sh@941 -- # uname 00:25:49.578 11:16:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:49.578 11:16:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81577 00:25:49.578 11:16:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:49.578 11:16:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:49.578 killing process with pid 81577 00:25:49.578 11:16:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81577' 00:25:49.578 11:16:57 -- common/autotest_common.sh@955 -- # kill 81577 00:25:49.578 11:16:57 -- common/autotest_common.sh@960 -- # wait 81577 00:25:50.952 11:16:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:50.952 11:16:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:50.952 11:16:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:50.952 11:16:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.952 11:16:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.952 11:16:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.952 11:16:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.952 11:16:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.952 11:16:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:50.952 
00:25:50.952 real 0m7.544s 00:25:50.952 user 0m22.621s 00:25:50.952 sys 0m1.444s 00:25:50.952 11:16:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:50.952 ************************************ 00:25:50.952 END TEST nvmf_multicontroller 00:25:50.952 ************************************ 00:25:50.952 11:16:59 -- common/autotest_common.sh@10 -- # set +x 00:25:50.952 11:16:59 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:50.952 11:16:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:50.952 11:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:50.952 11:16:59 -- common/autotest_common.sh@10 -- # set +x 00:25:51.211 ************************************ 00:25:51.211 START TEST nvmf_aer 00:25:51.211 ************************************ 00:25:51.211 11:16:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:51.211 * Looking for test storage... 00:25:51.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:51.211 11:16:59 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:51.211 11:16:59 -- nvmf/common.sh@7 -- # uname -s 00:25:51.211 11:16:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.211 11:16:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.211 11:16:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.211 11:16:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.211 11:16:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.211 11:16:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.211 11:16:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.211 11:16:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.211 11:16:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.211 11:16:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.211 11:16:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:51.211 11:16:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:51.211 11:16:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.211 11:16:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.211 11:16:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:51.211 11:16:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.211 11:16:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:51.211 11:16:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.211 11:16:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.211 11:16:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.211 11:16:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.211 11:16:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.211 11:16:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.211 11:16:59 -- paths/export.sh@5 -- # export PATH 00:25:51.211 11:16:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.211 11:16:59 -- nvmf/common.sh@47 -- # : 0 00:25:51.211 11:16:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.211 11:16:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.211 11:16:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.211 11:16:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.211 11:16:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.211 11:16:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.211 11:16:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.211 11:16:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.211 11:16:59 -- host/aer.sh@11 -- # nvmftestinit 00:25:51.211 11:16:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:51.211 11:16:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.211 11:16:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:51.211 11:16:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:51.211 11:16:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:51.211 11:16:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.211 11:16:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.211 11:16:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.211 11:16:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:51.211 11:16:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:51.211 11:16:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:51.211 11:16:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:51.211 11:16:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:51.211 11:16:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:51.211 11:16:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.211 11:16:59 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.211 11:16:59 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:51.211 11:16:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:51.211 11:16:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:51.211 11:16:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:51.211 11:16:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:51.211 11:16:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.211 11:16:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:51.211 11:16:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:51.211 11:16:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:51.211 11:16:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:51.211 11:16:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:51.211 11:16:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:51.211 Cannot find device "nvmf_tgt_br" 00:25:51.211 11:16:59 -- nvmf/common.sh@155 -- # true 00:25:51.211 11:16:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:51.211 Cannot find device "nvmf_tgt_br2" 00:25:51.211 11:16:59 -- nvmf/common.sh@156 -- # true 00:25:51.211 11:16:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:51.211 11:16:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:51.211 Cannot find device "nvmf_tgt_br" 00:25:51.211 11:16:59 -- nvmf/common.sh@158 -- # true 00:25:51.211 11:16:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:51.211 Cannot find device "nvmf_tgt_br2" 00:25:51.211 11:16:59 -- nvmf/common.sh@159 -- # true 00:25:51.211 11:16:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:51.469 11:16:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:51.469 11:16:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:51.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:51.469 11:16:59 -- nvmf/common.sh@162 -- # true 00:25:51.469 11:16:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:51.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:51.469 11:16:59 -- nvmf/common.sh@163 -- # true 00:25:51.470 11:16:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:51.470 11:16:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:51.470 11:16:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:51.470 11:16:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:51.470 11:16:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:51.470 11:16:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:51.470 11:16:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:51.470 11:16:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:51.470 11:16:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:51.470 11:16:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:51.470 11:16:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:51.470 11:16:59 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:51.470 11:16:59 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:51.470 11:16:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:51.470 11:16:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:51.470 11:16:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:51.470 11:16:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:51.470 11:16:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:51.470 11:16:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:51.470 11:16:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:51.470 11:16:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:51.470 11:16:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:51.470 11:16:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:51.470 11:16:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:51.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:25:51.470 00:25:51.470 --- 10.0.0.2 ping statistics --- 00:25:51.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.470 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:51.470 11:16:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:51.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:51.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:25:51.470 00:25:51.470 --- 10.0.0.3 ping statistics --- 00:25:51.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.470 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:51.470 11:16:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:51.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:51.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:51.470 00:25:51.470 --- 10.0.0.1 ping statistics --- 00:25:51.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.470 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:51.470 11:16:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.470 11:16:59 -- nvmf/common.sh@422 -- # return 0 00:25:51.470 11:16:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:51.470 11:16:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.470 11:16:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:51.470 11:16:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:51.470 11:16:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.470 11:16:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:51.470 11:16:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:51.727 11:16:59 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:51.727 11:16:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:51.727 11:16:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:51.727 11:16:59 -- common/autotest_common.sh@10 -- # set +x 00:25:51.727 11:16:59 -- nvmf/common.sh@470 -- # nvmfpid=81914 00:25:51.727 11:16:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:51.727 11:16:59 -- nvmf/common.sh@471 -- # waitforlisten 81914 00:25:51.727 11:16:59 -- common/autotest_common.sh@817 -- # '[' -z 81914 ']' 00:25:51.727 11:16:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.727 11:16:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:51.727 11:16:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.727 11:16:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:51.727 11:16:59 -- common/autotest_common.sh@10 -- # set +x 00:25:51.727 [2024-04-18 11:16:59.841928] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:51.727 [2024-04-18 11:16:59.842136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.985 [2024-04-18 11:17:00.021365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:52.242 [2024-04-18 11:17:00.297020] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.242 [2024-04-18 11:17:00.297312] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.242 [2024-04-18 11:17:00.297479] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.242 [2024-04-18 11:17:00.297626] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.242 [2024-04-18 11:17:00.297686] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:52.242 [2024-04-18 11:17:00.297950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.242 [2024-04-18 11:17:00.298303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.242 [2024-04-18 11:17:00.298309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:52.242 [2024-04-18 11:17:00.299008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.807 11:17:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:52.807 11:17:00 -- common/autotest_common.sh@850 -- # return 0 00:25:52.807 11:17:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:52.807 11:17:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:52.807 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:25:52.807 11:17:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.807 11:17:00 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.807 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.807 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:25:52.807 [2024-04-18 11:17:00.839959] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.807 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.807 11:17:00 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:52.807 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.807 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:25:52.807 Malloc0 00:25:52.807 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.807 11:17:00 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:52.807 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.807 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:25:52.807 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.807 11:17:00 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:52.807 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.807 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:25:52.807 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.807 11:17:00 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.807 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.807 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:25:52.807 [2024-04-18 11:17:00.967915] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.807 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.807 11:17:00 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:52.807 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.807 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:25:52.807 [2024-04-18 11:17:00.975575] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:52.807 [ 00:25:52.807 { 00:25:52.807 "allow_any_host": true, 00:25:52.807 "hosts": [], 00:25:52.807 "listen_addresses": [], 00:25:52.807 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:52.807 "subtype": "Discovery" 00:25:52.807 }, 00:25:52.807 { 00:25:52.807 "allow_any_host": true, 00:25:52.807 "hosts": 
[], 00:25:52.807 "listen_addresses": [ 00:25:52.807 { 00:25:52.807 "adrfam": "IPv4", 00:25:52.807 "traddr": "10.0.0.2", 00:25:52.807 "transport": "TCP", 00:25:52.807 "trsvcid": "4420", 00:25:52.807 "trtype": "TCP" 00:25:52.807 } 00:25:52.807 ], 00:25:52.807 "max_cntlid": 65519, 00:25:52.807 "max_namespaces": 2, 00:25:52.807 "min_cntlid": 1, 00:25:52.807 "model_number": "SPDK bdev Controller", 00:25:52.807 "namespaces": [ 00:25:52.807 { 00:25:52.807 "bdev_name": "Malloc0", 00:25:52.807 "name": "Malloc0", 00:25:52.807 "nguid": "94F64E6799F74964B2B7CC668A81ECA1", 00:25:52.807 "nsid": 1, 00:25:52.807 "uuid": "94f64e67-99f7-4964-b2b7-cc668a81eca1" 00:25:52.807 } 00:25:52.807 ], 00:25:52.807 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.807 "serial_number": "SPDK00000000000001", 00:25:52.807 "subtype": "NVMe" 00:25:52.807 } 00:25:52.807 ] 00:25:52.807 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.807 11:17:00 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:52.807 11:17:00 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:52.807 11:17:00 -- host/aer.sh@33 -- # aerpid=81968 00:25:52.807 11:17:00 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:52.807 11:17:00 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:52.807 11:17:00 -- common/autotest_common.sh@1251 -- # local i=0 00:25:52.807 11:17:00 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:52.807 11:17:00 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:25:52.807 11:17:00 -- common/autotest_common.sh@1254 -- # i=1 00:25:52.807 11:17:00 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:25:53.083 11:17:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:53.083 11:17:01 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:25:53.083 11:17:01 -- common/autotest_common.sh@1254 -- # i=2 00:25:53.083 11:17:01 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:25:53.083 11:17:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:53.083 11:17:01 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:25:53.083 11:17:01 -- common/autotest_common.sh@1254 -- # i=3 00:25:53.083 11:17:01 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:25:53.372 11:17:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:53.372 11:17:01 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:53.372 11:17:01 -- common/autotest_common.sh@1262 -- # return 0 00:25:53.372 11:17:01 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:53.372 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.372 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:53.372 Malloc1 00:25:53.372 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.372 11:17:01 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:53.372 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.372 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:53.372 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.372 11:17:01 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:53.372 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.372 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:53.372 [ 00:25:53.372 { 00:25:53.372 "allow_any_host": true, 00:25:53.372 "hosts": [], 00:25:53.372 "listen_addresses": [], 00:25:53.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:53.372 "subtype": "Discovery" 00:25:53.372 }, 00:25:53.372 { 00:25:53.372 "allow_any_host": true, 00:25:53.372 "hosts": [], 00:25:53.372 "listen_addresses": [ 00:25:53.372 { 00:25:53.372 "adrfam": "IPv4", 00:25:53.372 "traddr": "10.0.0.2", 00:25:53.372 "transport": "TCP", 00:25:53.372 "trsvcid": "4420", 00:25:53.372 "trtype": "TCP" 00:25:53.372 } 00:25:53.372 ], 00:25:53.372 "max_cntlid": 65519, 00:25:53.372 "max_namespaces": 2, 00:25:53.372 "min_cntlid": 1, 00:25:53.372 "model_number": "SPDK bdev Controller", 00:25:53.372 "namespaces": [ 00:25:53.372 { 00:25:53.372 "bdev_name": "Malloc0", 00:25:53.372 "name": "Malloc0", 00:25:53.372 "nguid": "94F64E6799F74964B2B7CC668A81ECA1", 00:25:53.372 "nsid": 1, 00:25:53.372 "uuid": "94f64e67-99f7-4964-b2b7-cc668a81eca1" 00:25:53.372 }, 00:25:53.372 { 00:25:53.372 "bdev_name": "Malloc1", 00:25:53.372 "name": "Malloc1", 00:25:53.372 "nguid": "B02E9C20FFDC4D3D8BA90FBD0A50EEAB", 00:25:53.372 "nsid": 2, 00:25:53.372 "uuid": "b02e9c20-ffdc-4d3d-8ba9-0fbd0a50eeab" 00:25:53.372 } 00:25:53.372 ], 00:25:53.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.372 "serial_number": "SPDK00000000000001", 00:25:53.372 "subtype": "NVMe" 00:25:53.372 } 00:25:53.372 ] 00:25:53.372 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.372 11:17:01 -- host/aer.sh@43 -- # wait 81968 00:25:53.373 Asynchronous Event Request test 00:25:53.373 Attaching to 10.0.0.2 00:25:53.373 Attached to 10.0.0.2 00:25:53.373 Registering asynchronous event callbacks... 00:25:53.373 Starting namespace attribute notice tests for all controllers... 00:25:53.373 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:53.373 aer_cb - Changed Namespace 00:25:53.373 Cleaning up... 
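The aer_cb output above is the point of the test: cnode1 was created with -m 2 (at most two namespaces), the aer tool arms an Asynchronous Event Request for namespace attribute notices, and hot-adding a second namespace makes the target raise the AEN. A condensed sketch of the same flow against a running nvmf_tgt, again assuming rpc_cmd wraps scripts/rpc.py and that paths are relative to the SPDK repo root:

# target side: TCP transport plus a subsystem capped at two namespaces
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# host side: start the AER listener; it creates the touch file once it is armed
# (the waitforfile loop in the trace polls for exactly that)
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &

# once /tmp/aer_touch_file exists, hot-add namespace 2; the target sends a
# Namespace Attribute Changed AEN and aer logs "aer_cb - Changed Namespace"
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2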
00:25:53.373 11:17:01 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:53.373 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.373 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:53.629 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.629 11:17:01 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:53.629 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.629 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:53.886 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.886 11:17:01 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.886 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.886 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:53.886 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.886 11:17:01 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:53.886 11:17:01 -- host/aer.sh@51 -- # nvmftestfini 00:25:53.886 11:17:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:53.886 11:17:01 -- nvmf/common.sh@117 -- # sync 00:25:53.886 11:17:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:53.886 11:17:01 -- nvmf/common.sh@120 -- # set +e 00:25:53.886 11:17:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:53.886 11:17:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:53.886 rmmod nvme_tcp 00:25:53.886 rmmod nvme_fabrics 00:25:53.886 rmmod nvme_keyring 00:25:53.886 11:17:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:53.886 11:17:01 -- nvmf/common.sh@124 -- # set -e 00:25:53.886 11:17:01 -- nvmf/common.sh@125 -- # return 0 00:25:53.886 11:17:01 -- nvmf/common.sh@478 -- # '[' -n 81914 ']' 00:25:53.886 11:17:01 -- nvmf/common.sh@479 -- # killprocess 81914 00:25:53.886 11:17:01 -- common/autotest_common.sh@936 -- # '[' -z 81914 ']' 00:25:53.886 11:17:01 -- common/autotest_common.sh@940 -- # kill -0 81914 00:25:53.886 11:17:01 -- common/autotest_common.sh@941 -- # uname 00:25:53.886 11:17:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:53.886 11:17:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81914 00:25:53.886 killing process with pid 81914 00:25:53.886 11:17:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:53.886 11:17:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:53.886 11:17:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81914' 00:25:53.886 11:17:01 -- common/autotest_common.sh@955 -- # kill 81914 00:25:53.886 [2024-04-18 11:17:01.996201] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:53.886 11:17:01 -- common/autotest_common.sh@960 -- # wait 81914 00:25:55.259 11:17:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:55.259 11:17:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:55.259 11:17:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:55.259 11:17:03 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:55.259 11:17:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:55.259 11:17:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.259 11:17:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.259 11:17:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.259 11:17:03 -- nvmf/common.sh@279 
-- # ip -4 addr flush nvmf_init_if 00:25:55.259 00:25:55.259 real 0m3.998s 00:25:55.259 user 0m10.791s 00:25:55.259 sys 0m0.935s 00:25:55.259 11:17:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:55.259 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:25:55.259 ************************************ 00:25:55.259 END TEST nvmf_aer 00:25:55.259 ************************************ 00:25:55.259 11:17:03 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:55.259 11:17:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:55.259 11:17:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:55.259 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:25:55.259 ************************************ 00:25:55.259 START TEST nvmf_async_init 00:25:55.259 ************************************ 00:25:55.259 11:17:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:55.259 * Looking for test storage... 00:25:55.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:55.259 11:17:03 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:55.259 11:17:03 -- nvmf/common.sh@7 -- # uname -s 00:25:55.259 11:17:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.259 11:17:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.259 11:17:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.259 11:17:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.259 11:17:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.259 11:17:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.259 11:17:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.259 11:17:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.259 11:17:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.259 11:17:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.259 11:17:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:55.259 11:17:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:55.259 11:17:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.259 11:17:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.259 11:17:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:55.259 11:17:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.259 11:17:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:55.259 11:17:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.259 11:17:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.259 11:17:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.259 11:17:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.259 11:17:03 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.259 11:17:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.259 11:17:03 -- paths/export.sh@5 -- # export PATH 00:25:55.259 11:17:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.259 11:17:03 -- nvmf/common.sh@47 -- # : 0 00:25:55.259 11:17:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:55.259 11:17:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:55.259 11:17:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.259 11:17:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.259 11:17:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.259 11:17:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:55.259 11:17:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:55.259 11:17:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:55.259 11:17:03 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:55.259 11:17:03 -- host/async_init.sh@14 -- # null_block_size=512 00:25:55.259 11:17:03 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:55.259 11:17:03 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:55.259 11:17:03 -- host/async_init.sh@20 -- # uuidgen 00:25:55.259 11:17:03 -- host/async_init.sh@20 -- # tr -d - 00:25:55.259 11:17:03 -- host/async_init.sh@20 -- # nguid=e12aeb56e7594c6a8ea9d80abd6b232b 00:25:55.259 11:17:03 -- host/async_init.sh@22 -- # nvmftestinit 00:25:55.259 11:17:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:55.259 11:17:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.259 11:17:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:55.259 11:17:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:55.259 11:17:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:55.259 11:17:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.259 11:17:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.259 11:17:03 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:25:55.259 11:17:03 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:55.259 11:17:03 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:55.259 11:17:03 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:55.259 11:17:03 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:55.259 11:17:03 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:55.259 11:17:03 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:55.259 11:17:03 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.259 11:17:03 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.259 11:17:03 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:55.259 11:17:03 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:55.259 11:17:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:55.259 11:17:03 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:55.259 11:17:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:55.259 11:17:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.259 11:17:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:55.259 11:17:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:55.259 11:17:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:55.259 11:17:03 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:55.259 11:17:03 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:55.259 11:17:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:55.517 Cannot find device "nvmf_tgt_br" 00:25:55.517 11:17:03 -- nvmf/common.sh@155 -- # true 00:25:55.517 11:17:03 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:55.517 Cannot find device "nvmf_tgt_br2" 00:25:55.517 11:17:03 -- nvmf/common.sh@156 -- # true 00:25:55.517 11:17:03 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:55.517 11:17:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:55.517 Cannot find device "nvmf_tgt_br" 00:25:55.517 11:17:03 -- nvmf/common.sh@158 -- # true 00:25:55.517 11:17:03 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:55.517 Cannot find device "nvmf_tgt_br2" 00:25:55.517 11:17:03 -- nvmf/common.sh@159 -- # true 00:25:55.517 11:17:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:55.517 11:17:03 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:55.517 11:17:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:55.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:55.517 11:17:03 -- nvmf/common.sh@162 -- # true 00:25:55.517 11:17:03 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:55.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:55.517 11:17:03 -- nvmf/common.sh@163 -- # true 00:25:55.517 11:17:03 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:55.517 11:17:03 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:55.517 11:17:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:55.517 11:17:03 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:55.517 11:17:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:55.517 11:17:03 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:25:55.517 11:17:03 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:55.517 11:17:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:55.517 11:17:03 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:55.517 11:17:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:55.517 11:17:03 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:55.517 11:17:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:55.517 11:17:03 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:55.517 11:17:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:55.517 11:17:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:55.517 11:17:03 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:55.517 11:17:03 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:55.517 11:17:03 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:55.517 11:17:03 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:55.517 11:17:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:55.776 11:17:03 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:55.776 11:17:03 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:55.776 11:17:03 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:55.776 11:17:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:55.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:25:55.776 00:25:55.776 --- 10.0.0.2 ping statistics --- 00:25:55.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.776 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:25:55.776 11:17:03 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:55.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:55.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:25:55.776 00:25:55.776 --- 10.0.0.3 ping statistics --- 00:25:55.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.776 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:55.776 11:17:03 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:55.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:55.776 00:25:55.776 --- 10.0.0.1 ping statistics --- 00:25:55.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.776 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:55.776 11:17:03 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.776 11:17:03 -- nvmf/common.sh@422 -- # return 0 00:25:55.776 11:17:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:55.776 11:17:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.776 11:17:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:55.776 11:17:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:55.776 11:17:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.776 11:17:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:55.776 11:17:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:55.776 11:17:03 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:55.776 11:17:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:55.776 11:17:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:55.776 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:25:55.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.776 11:17:03 -- nvmf/common.sh@470 -- # nvmfpid=82160 00:25:55.776 11:17:03 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:55.776 11:17:03 -- nvmf/common.sh@471 -- # waitforlisten 82160 00:25:55.776 11:17:03 -- common/autotest_common.sh@817 -- # '[' -z 82160 ']' 00:25:55.776 11:17:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.776 11:17:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:55.776 11:17:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.776 11:17:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:55.776 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:25:55.776 [2024-04-18 11:17:03.922289] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:55.776 [2024-04-18 11:17:03.922666] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.034 [2024-04-18 11:17:04.103302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.292 [2024-04-18 11:17:04.392970] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.292 [2024-04-18 11:17:04.393325] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.292 [2024-04-18 11:17:04.393537] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.292 [2024-04-18 11:17:04.393740] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.292 [2024-04-18 11:17:04.393966] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
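The nvmf_veth_init trace above reduces to a small amount of iproute2 plumbing: one target network namespace, veth pairs whose target-side ends live inside it, and a bridge tying the host-side peers together, followed by ping sanity checks. A minimal standalone sketch of that layout, using the same interface names and 10.0.0.0/24 addresses that appear in the trace (the real logic lives in test/nvmf/common.sh; this is an illustrative reconstruction, not the harness script):

    #!/usr/bin/env bash
    # Illustrative reconstruction of the layout driven by nvmf_veth_init in the log above.
    # Interface names and addresses match the trace; cleanup/error handling are omitted.
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # Initiator-side and target-side veth pairs.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up on both sides of the namespace boundary.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers and open TCP/4420 from the initiator interface.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks, mirroring the pings recorded in the log.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this in place the target is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...), which is exactly what the nvmfappstart line in the trace does next.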
00:25:56.292 [2024-04-18 11:17:04.394164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.885 11:17:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:56.885 11:17:04 -- common/autotest_common.sh@850 -- # return 0 00:25:56.885 11:17:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:56.885 11:17:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:56.885 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:56.885 11:17:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.885 11:17:04 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:56.885 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.885 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:56.885 [2024-04-18 11:17:04.923318] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.885 11:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.885 11:17:04 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:56.885 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.885 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:56.885 null0 00:25:56.885 11:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.885 11:17:04 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:56.885 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.885 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:56.885 11:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.885 11:17:04 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:56.885 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.885 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:56.885 11:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.885 11:17:04 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e12aeb56e7594c6a8ea9d80abd6b232b 00:25:56.885 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.885 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:56.885 11:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.885 11:17:04 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:56.885 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.885 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:56.885 [2024-04-18 11:17:04.971822] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.885 11:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.885 11:17:04 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:56.885 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.885 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:57.146 nvme0n1 00:25:57.146 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.146 11:17:05 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:57.146 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.146 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:57.146 [ 00:25:57.146 { 00:25:57.146 "aliases": [ 00:25:57.146 "e12aeb56-e759-4c6a-8ea9-d80abd6b232b" 
00:25:57.146 ], 00:25:57.146 "assigned_rate_limits": { 00:25:57.146 "r_mbytes_per_sec": 0, 00:25:57.146 "rw_ios_per_sec": 0, 00:25:57.146 "rw_mbytes_per_sec": 0, 00:25:57.146 "w_mbytes_per_sec": 0 00:25:57.146 }, 00:25:57.146 "block_size": 512, 00:25:57.146 "claimed": false, 00:25:57.146 "driver_specific": { 00:25:57.146 "mp_policy": "active_passive", 00:25:57.146 "nvme": [ 00:25:57.146 { 00:25:57.146 "ctrlr_data": { 00:25:57.146 "ana_reporting": false, 00:25:57.146 "cntlid": 1, 00:25:57.146 "firmware_revision": "24.05", 00:25:57.146 "model_number": "SPDK bdev Controller", 00:25:57.146 "multi_ctrlr": true, 00:25:57.146 "oacs": { 00:25:57.146 "firmware": 0, 00:25:57.146 "format": 0, 00:25:57.146 "ns_manage": 0, 00:25:57.146 "security": 0 00:25:57.146 }, 00:25:57.146 "serial_number": "00000000000000000000", 00:25:57.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.146 "vendor_id": "0x8086" 00:25:57.146 }, 00:25:57.146 "ns_data": { 00:25:57.146 "can_share": true, 00:25:57.146 "id": 1 00:25:57.146 }, 00:25:57.146 "trid": { 00:25:57.146 "adrfam": "IPv4", 00:25:57.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.146 "traddr": "10.0.0.2", 00:25:57.146 "trsvcid": "4420", 00:25:57.146 "trtype": "TCP" 00:25:57.146 }, 00:25:57.146 "vs": { 00:25:57.146 "nvme_version": "1.3" 00:25:57.146 } 00:25:57.146 } 00:25:57.146 ] 00:25:57.146 }, 00:25:57.146 "memory_domains": [ 00:25:57.146 { 00:25:57.146 "dma_device_id": "system", 00:25:57.146 "dma_device_type": 1 00:25:57.146 } 00:25:57.146 ], 00:25:57.146 "name": "nvme0n1", 00:25:57.146 "num_blocks": 2097152, 00:25:57.146 "product_name": "NVMe disk", 00:25:57.146 "supported_io_types": { 00:25:57.146 "abort": true, 00:25:57.146 "compare": true, 00:25:57.146 "compare_and_write": true, 00:25:57.146 "flush": true, 00:25:57.146 "nvme_admin": true, 00:25:57.146 "nvme_io": true, 00:25:57.146 "read": true, 00:25:57.146 "reset": true, 00:25:57.146 "unmap": false, 00:25:57.146 "write": true, 00:25:57.146 "write_zeroes": true 00:25:57.146 }, 00:25:57.146 "uuid": "e12aeb56-e759-4c6a-8ea9-d80abd6b232b", 00:25:57.146 "zoned": false 00:25:57.146 } 00:25:57.146 ] 00:25:57.146 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.146 11:17:05 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:57.146 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.146 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:57.146 [2024-04-18 11:17:05.246005] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:57.146 [2024-04-18 11:17:05.246364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006840 (9): Bad file descriptor 00:25:57.405 [2024-04-18 11:17:05.389504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:57.405 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.405 11:17:05 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:57.405 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.405 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:57.405 [ 00:25:57.405 { 00:25:57.405 "aliases": [ 00:25:57.405 "e12aeb56-e759-4c6a-8ea9-d80abd6b232b" 00:25:57.405 ], 00:25:57.405 "assigned_rate_limits": { 00:25:57.405 "r_mbytes_per_sec": 0, 00:25:57.405 "rw_ios_per_sec": 0, 00:25:57.405 "rw_mbytes_per_sec": 0, 00:25:57.405 "w_mbytes_per_sec": 0 00:25:57.405 }, 00:25:57.405 "block_size": 512, 00:25:57.405 "claimed": false, 00:25:57.405 "driver_specific": { 00:25:57.405 "mp_policy": "active_passive", 00:25:57.405 "nvme": [ 00:25:57.405 { 00:25:57.405 "ctrlr_data": { 00:25:57.405 "ana_reporting": false, 00:25:57.405 "cntlid": 2, 00:25:57.405 "firmware_revision": "24.05", 00:25:57.405 "model_number": "SPDK bdev Controller", 00:25:57.405 "multi_ctrlr": true, 00:25:57.405 "oacs": { 00:25:57.405 "firmware": 0, 00:25:57.405 "format": 0, 00:25:57.405 "ns_manage": 0, 00:25:57.405 "security": 0 00:25:57.405 }, 00:25:57.405 "serial_number": "00000000000000000000", 00:25:57.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.405 "vendor_id": "0x8086" 00:25:57.405 }, 00:25:57.405 "ns_data": { 00:25:57.405 "can_share": true, 00:25:57.405 "id": 1 00:25:57.405 }, 00:25:57.405 "trid": { 00:25:57.405 "adrfam": "IPv4", 00:25:57.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.405 "traddr": "10.0.0.2", 00:25:57.405 "trsvcid": "4420", 00:25:57.405 "trtype": "TCP" 00:25:57.405 }, 00:25:57.405 "vs": { 00:25:57.405 "nvme_version": "1.3" 00:25:57.405 } 00:25:57.405 } 00:25:57.405 ] 00:25:57.405 }, 00:25:57.405 "memory_domains": [ 00:25:57.405 { 00:25:57.405 "dma_device_id": "system", 00:25:57.405 "dma_device_type": 1 00:25:57.405 } 00:25:57.405 ], 00:25:57.405 "name": "nvme0n1", 00:25:57.405 "num_blocks": 2097152, 00:25:57.405 "product_name": "NVMe disk", 00:25:57.405 "supported_io_types": { 00:25:57.405 "abort": true, 00:25:57.405 "compare": true, 00:25:57.405 "compare_and_write": true, 00:25:57.405 "flush": true, 00:25:57.405 "nvme_admin": true, 00:25:57.405 "nvme_io": true, 00:25:57.405 "read": true, 00:25:57.405 "reset": true, 00:25:57.405 "unmap": false, 00:25:57.405 "write": true, 00:25:57.405 "write_zeroes": true 00:25:57.405 }, 00:25:57.405 "uuid": "e12aeb56-e759-4c6a-8ea9-d80abd6b232b", 00:25:57.405 "zoned": false 00:25:57.405 } 00:25:57.405 ] 00:25:57.405 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.405 11:17:05 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.405 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.405 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:57.405 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.405 11:17:05 -- host/async_init.sh@53 -- # mktemp 00:25:57.405 11:17:05 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.TfXUgf7D77 00:25:57.405 11:17:05 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:57.405 11:17:05 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.TfXUgf7D77 00:25:57.405 11:17:05 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:57.405 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.405 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:57.405 11:17:05 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.405 11:17:05 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:57.405 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.405 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:57.405 [2024-04-18 11:17:05.462234] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:57.405 [2024-04-18 11:17:05.462602] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:57.405 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.405 11:17:05 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfXUgf7D77 00:25:57.405 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.405 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:57.405 [2024-04-18 11:17:05.470251] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:57.405 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.405 11:17:05 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfXUgf7D77 00:25:57.405 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.405 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:57.406 [2024-04-18 11:17:05.478195] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:57.406 [2024-04-18 11:17:05.478342] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:57.406 nvme0n1 00:25:57.406 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.406 11:17:05 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:57.406 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.406 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:57.406 [ 00:25:57.406 { 00:25:57.406 "aliases": [ 00:25:57.406 "e12aeb56-e759-4c6a-8ea9-d80abd6b232b" 00:25:57.406 ], 00:25:57.406 "assigned_rate_limits": { 00:25:57.406 "r_mbytes_per_sec": 0, 00:25:57.406 "rw_ios_per_sec": 0, 00:25:57.406 "rw_mbytes_per_sec": 0, 00:25:57.406 "w_mbytes_per_sec": 0 00:25:57.406 }, 00:25:57.406 "block_size": 512, 00:25:57.406 "claimed": false, 00:25:57.406 "driver_specific": { 00:25:57.406 "mp_policy": "active_passive", 00:25:57.406 "nvme": [ 00:25:57.406 { 00:25:57.406 "ctrlr_data": { 00:25:57.406 "ana_reporting": false, 00:25:57.406 "cntlid": 3, 00:25:57.406 "firmware_revision": "24.05", 00:25:57.406 "model_number": "SPDK bdev Controller", 00:25:57.406 "multi_ctrlr": true, 00:25:57.406 "oacs": { 00:25:57.406 "firmware": 0, 00:25:57.406 "format": 0, 00:25:57.406 "ns_manage": 0, 00:25:57.406 "security": 0 00:25:57.406 }, 00:25:57.406 "serial_number": "00000000000000000000", 00:25:57.406 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.406 "vendor_id": "0x8086" 00:25:57.406 }, 00:25:57.406 "ns_data": { 00:25:57.406 "can_share": true, 00:25:57.406 "id": 1 00:25:57.406 }, 00:25:57.406 "trid": { 00:25:57.406 "adrfam": "IPv4", 00:25:57.406 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.406 "traddr": "10.0.0.2", 00:25:57.406 "trsvcid": "4421", 00:25:57.406 "trtype": 
"TCP" 00:25:57.406 }, 00:25:57.406 "vs": { 00:25:57.406 "nvme_version": "1.3" 00:25:57.406 } 00:25:57.406 } 00:25:57.406 ] 00:25:57.406 }, 00:25:57.406 "memory_domains": [ 00:25:57.406 { 00:25:57.406 "dma_device_id": "system", 00:25:57.406 "dma_device_type": 1 00:25:57.406 } 00:25:57.406 ], 00:25:57.406 "name": "nvme0n1", 00:25:57.406 "num_blocks": 2097152, 00:25:57.406 "product_name": "NVMe disk", 00:25:57.406 "supported_io_types": { 00:25:57.406 "abort": true, 00:25:57.406 "compare": true, 00:25:57.406 "compare_and_write": true, 00:25:57.406 "flush": true, 00:25:57.406 "nvme_admin": true, 00:25:57.406 "nvme_io": true, 00:25:57.406 "read": true, 00:25:57.406 "reset": true, 00:25:57.406 "unmap": false, 00:25:57.406 "write": true, 00:25:57.406 "write_zeroes": true 00:25:57.406 }, 00:25:57.406 "uuid": "e12aeb56-e759-4c6a-8ea9-d80abd6b232b", 00:25:57.406 "zoned": false 00:25:57.406 } 00:25:57.406 ] 00:25:57.406 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.406 11:17:05 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.406 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.406 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:57.406 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.406 11:17:05 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.TfXUgf7D77 00:25:57.406 11:17:05 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:57.406 11:17:05 -- host/async_init.sh@78 -- # nvmftestfini 00:25:57.406 11:17:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:57.406 11:17:05 -- nvmf/common.sh@117 -- # sync 00:25:57.665 11:17:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:57.665 11:17:05 -- nvmf/common.sh@120 -- # set +e 00:25:57.665 11:17:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:57.665 11:17:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:57.665 rmmod nvme_tcp 00:25:57.665 rmmod nvme_fabrics 00:25:57.665 rmmod nvme_keyring 00:25:57.665 11:17:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:57.665 11:17:05 -- nvmf/common.sh@124 -- # set -e 00:25:57.665 11:17:05 -- nvmf/common.sh@125 -- # return 0 00:25:57.665 11:17:05 -- nvmf/common.sh@478 -- # '[' -n 82160 ']' 00:25:57.665 11:17:05 -- nvmf/common.sh@479 -- # killprocess 82160 00:25:57.665 11:17:05 -- common/autotest_common.sh@936 -- # '[' -z 82160 ']' 00:25:57.665 11:17:05 -- common/autotest_common.sh@940 -- # kill -0 82160 00:25:57.665 11:17:05 -- common/autotest_common.sh@941 -- # uname 00:25:57.665 11:17:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:57.665 11:17:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82160 00:25:57.665 11:17:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:57.665 killing process with pid 82160 00:25:57.665 11:17:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:57.665 11:17:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82160' 00:25:57.665 11:17:05 -- common/autotest_common.sh@955 -- # kill 82160 00:25:57.665 [2024-04-18 11:17:05.764906] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:57.665 [2024-04-18 11:17:05.764974] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:57.665 11:17:05 -- common/autotest_common.sh@960 -- # wait 82160 00:25:59.041 11:17:07 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:25:59.041 11:17:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:59.041 11:17:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:59.041 11:17:07 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:59.041 11:17:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:59.041 11:17:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.041 11:17:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.041 11:17:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.041 11:17:07 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:59.041 ************************************ 00:25:59.041 END TEST nvmf_async_init 00:25:59.041 ************************************ 00:25:59.041 00:25:59.041 real 0m3.744s 00:25:59.041 user 0m3.467s 00:25:59.041 sys 0m0.744s 00:25:59.041 11:17:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:59.041 11:17:07 -- common/autotest_common.sh@10 -- # set +x 00:25:59.041 11:17:07 -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:59.041 11:17:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:59.041 11:17:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:59.041 11:17:07 -- common/autotest_common.sh@10 -- # set +x 00:25:59.041 ************************************ 00:25:59.041 START TEST dma 00:25:59.041 ************************************ 00:25:59.041 11:17:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:59.300 * Looking for test storage... 00:25:59.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:59.300 11:17:07 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:59.300 11:17:07 -- nvmf/common.sh@7 -- # uname -s 00:25:59.300 11:17:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.300 11:17:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.300 11:17:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.300 11:17:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.300 11:17:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.300 11:17:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.300 11:17:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.300 11:17:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.300 11:17:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.300 11:17:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.300 11:17:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:59.300 11:17:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:59.300 11:17:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.300 11:17:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.300 11:17:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:59.300 11:17:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.300 11:17:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:59.300 11:17:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.300 11:17:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.300 11:17:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
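The tail of the nvmf_async_init run that just finished also exercised the experimental TLS path: an interchange-format PSK was written to a temp key file, the subsystem was switched from allow-any-host to an explicit host list, a --secure-channel listener was added on port 4421, and the initiator reattached with the same PSK. A hedged reconstruction of that fragment, reusing the key material and NQNs shown in the trace (the key is the test vector from the log, not a real secret):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    key_path=$(mktemp)                                        # /tmp/tmp.TfXUgf7D77 in the trace
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"

    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

Both the PSK-path listener and the controller attach emit the deprecation warnings visible in the log (PSK path and spdk_nvme_ctrlr_opts.psk scheduled for removal in v24.09), which is expected for this test at this SPDK revision.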
00:25:59.300 11:17:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.300 11:17:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.300 11:17:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.300 11:17:07 -- paths/export.sh@5 -- # export PATH 00:25:59.300 11:17:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.300 11:17:07 -- nvmf/common.sh@47 -- # : 0 00:25:59.300 11:17:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:59.300 11:17:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:59.300 11:17:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.300 11:17:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.300 11:17:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.300 11:17:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:59.300 11:17:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:59.300 11:17:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:59.300 11:17:07 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:59.300 11:17:07 -- host/dma.sh@13 -- # exit 0 00:25:59.300 00:25:59.300 real 0m0.109s 00:25:59.300 user 0m0.056s 00:25:59.300 sys 0m0.057s 00:25:59.300 ************************************ 00:25:59.300 END TEST dma 00:25:59.300 ************************************ 00:25:59.300 11:17:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:59.300 11:17:07 -- common/autotest_common.sh@10 -- # set +x 00:25:59.300 11:17:07 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:59.300 11:17:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:59.300 11:17:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:59.300 11:17:07 -- common/autotest_common.sh@10 -- # set +x 00:25:59.300 ************************************ 00:25:59.300 START TEST nvmf_identify 00:25:59.300 ************************************ 00:25:59.300 11:17:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:59.300 * Looking for test storage... 00:25:59.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:59.300 11:17:07 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:59.300 11:17:07 -- nvmf/common.sh@7 -- # uname -s 00:25:59.300 11:17:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.300 11:17:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.300 11:17:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.300 11:17:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.300 11:17:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.300 11:17:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.300 11:17:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.301 11:17:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.301 11:17:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.301 11:17:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.559 11:17:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:59.559 11:17:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:25:59.559 11:17:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.559 11:17:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.559 11:17:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:59.559 11:17:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.559 11:17:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:59.559 11:17:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.559 11:17:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.559 11:17:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.559 11:17:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.559 11:17:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.559 11:17:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.559 11:17:07 -- paths/export.sh@5 -- # export PATH 00:25:59.559 11:17:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.559 11:17:07 -- nvmf/common.sh@47 -- # : 0 00:25:59.559 11:17:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:59.559 11:17:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:59.559 11:17:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.559 11:17:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.559 11:17:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.559 11:17:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:59.559 11:17:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:59.559 11:17:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:59.559 11:17:07 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:59.559 11:17:07 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:59.559 11:17:07 -- host/identify.sh@14 -- # nvmftestinit 00:25:59.559 11:17:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:59.559 11:17:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.559 11:17:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:59.559 11:17:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:59.559 11:17:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:59.559 11:17:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.559 11:17:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.559 11:17:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.559 11:17:07 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:59.559 11:17:07 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:59.559 11:17:07 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:59.559 11:17:07 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:59.559 11:17:07 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:59.559 11:17:07 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:25:59.559 11:17:07 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.559 11:17:07 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.559 11:17:07 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:59.559 11:17:07 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:59.559 11:17:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:59.559 11:17:07 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:59.559 11:17:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:59.559 11:17:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.559 11:17:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:59.559 11:17:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:59.559 11:17:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:59.559 11:17:07 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:59.560 11:17:07 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:59.560 11:17:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:59.560 Cannot find device "nvmf_tgt_br" 00:25:59.560 11:17:07 -- nvmf/common.sh@155 -- # true 00:25:59.560 11:17:07 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:59.560 Cannot find device "nvmf_tgt_br2" 00:25:59.560 11:17:07 -- nvmf/common.sh@156 -- # true 00:25:59.560 11:17:07 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:59.560 11:17:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:59.560 Cannot find device "nvmf_tgt_br" 00:25:59.560 11:17:07 -- nvmf/common.sh@158 -- # true 00:25:59.560 11:17:07 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:59.560 Cannot find device "nvmf_tgt_br2" 00:25:59.560 11:17:07 -- nvmf/common.sh@159 -- # true 00:25:59.560 11:17:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:59.560 11:17:07 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:59.560 11:17:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:59.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:59.560 11:17:07 -- nvmf/common.sh@162 -- # true 00:25:59.560 11:17:07 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:59.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:59.560 11:17:07 -- nvmf/common.sh@163 -- # true 00:25:59.560 11:17:07 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:59.560 11:17:07 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:59.560 11:17:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:59.560 11:17:07 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:59.560 11:17:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:59.560 11:17:07 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:59.560 11:17:07 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:59.560 11:17:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:59.560 11:17:07 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:59.560 11:17:07 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:25:59.560 11:17:07 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:59.560 11:17:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:59.560 11:17:07 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:59.560 11:17:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:59.818 11:17:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:59.818 11:17:07 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:59.818 11:17:07 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:59.818 11:17:07 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:59.818 11:17:07 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:59.818 11:17:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:59.818 11:17:07 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:59.818 11:17:07 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:59.818 11:17:07 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:59.818 11:17:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:59.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:25:59.818 00:25:59.818 --- 10.0.0.2 ping statistics --- 00:25:59.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.818 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:25:59.818 11:17:07 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:59.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:59.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:25:59.818 00:25:59.818 --- 10.0.0.3 ping statistics --- 00:25:59.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.818 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:59.818 11:17:07 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:59.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:25:59.818 00:25:59.818 --- 10.0.0.1 ping statistics --- 00:25:59.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.818 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:25:59.818 11:17:07 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.818 11:17:07 -- nvmf/common.sh@422 -- # return 0 00:25:59.818 11:17:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:59.818 11:17:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.818 11:17:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:59.818 11:17:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:59.818 11:17:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.818 11:17:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:59.818 11:17:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:59.818 11:17:07 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:59.818 11:17:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:59.818 11:17:07 -- common/autotest_common.sh@10 -- # set +x 00:25:59.818 11:17:07 -- host/identify.sh@19 -- # nvmfpid=82452 00:25:59.818 11:17:07 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:59.818 11:17:07 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:59.818 11:17:07 -- host/identify.sh@23 -- # waitforlisten 82452 00:25:59.818 11:17:07 -- common/autotest_common.sh@817 -- # '[' -z 82452 ']' 00:25:59.818 11:17:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.818 11:17:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:59.818 11:17:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.818 11:17:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:59.818 11:17:07 -- common/autotest_common.sh@10 -- # set +x 00:25:59.818 [2024-04-18 11:17:08.017065] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:59.818 [2024-04-18 11:17:08.017263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.076 [2024-04-18 11:17:08.193798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:00.334 [2024-04-18 11:17:08.487580] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.334 [2024-04-18 11:17:08.487666] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.334 [2024-04-18 11:17:08.487692] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.334 [2024-04-18 11:17:08.487709] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.334 [2024-04-18 11:17:08.487727] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:00.334 [2024-04-18 11:17:08.487960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.334 [2024-04-18 11:17:08.488838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.334 [2024-04-18 11:17:08.488953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.334 [2024-04-18 11:17:08.488968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.900 11:17:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:00.900 11:17:08 -- common/autotest_common.sh@850 -- # return 0 00:26:00.900 11:17:08 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:00.900 11:17:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.900 11:17:08 -- common/autotest_common.sh@10 -- # set +x 00:26:00.900 [2024-04-18 11:17:08.962238] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.900 11:17:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.900 11:17:08 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:00.900 11:17:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:00.900 11:17:08 -- common/autotest_common.sh@10 -- # set +x 00:26:00.900 11:17:09 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:00.900 11:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.900 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:26:00.900 Malloc0 00:26:00.900 11:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.900 11:17:09 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:00.900 11:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.900 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:26:00.900 11:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.900 11:17:09 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:00.900 11:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.900 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:26:01.159 11:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.159 11:17:09 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.159 11:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.159 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:26:01.159 [2024-04-18 11:17:09.125978] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.159 11:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.159 11:17:09 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:01.159 11:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.159 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:26:01.159 11:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.159 11:17:09 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:01.159 11:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.159 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:26:01.159 [2024-04-18 11:17:09.141632] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:01.159 [ 
00:26:01.159 { 00:26:01.159 "allow_any_host": true, 00:26:01.159 "hosts": [], 00:26:01.159 "listen_addresses": [ 00:26:01.159 { 00:26:01.159 "adrfam": "IPv4", 00:26:01.159 "traddr": "10.0.0.2", 00:26:01.159 "transport": "TCP", 00:26:01.159 "trsvcid": "4420", 00:26:01.159 "trtype": "TCP" 00:26:01.159 } 00:26:01.159 ], 00:26:01.159 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:01.159 "subtype": "Discovery" 00:26:01.159 }, 00:26:01.159 { 00:26:01.159 "allow_any_host": true, 00:26:01.159 "hosts": [], 00:26:01.159 "listen_addresses": [ 00:26:01.159 { 00:26:01.159 "adrfam": "IPv4", 00:26:01.159 "traddr": "10.0.0.2", 00:26:01.159 "transport": "TCP", 00:26:01.159 "trsvcid": "4420", 00:26:01.159 "trtype": "TCP" 00:26:01.159 } 00:26:01.159 ], 00:26:01.159 "max_cntlid": 65519, 00:26:01.159 "max_namespaces": 32, 00:26:01.159 "min_cntlid": 1, 00:26:01.159 "model_number": "SPDK bdev Controller", 00:26:01.159 "namespaces": [ 00:26:01.159 { 00:26:01.159 "bdev_name": "Malloc0", 00:26:01.159 "eui64": "ABCDEF0123456789", 00:26:01.159 "name": "Malloc0", 00:26:01.159 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:01.159 "nsid": 1, 00:26:01.159 "uuid": "6b643b61-1cae-4b50-93c1-22d8e5e5e8cc" 00:26:01.159 } 00:26:01.159 ], 00:26:01.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:01.159 "serial_number": "SPDK00000000000001", 00:26:01.159 "subtype": "NVMe" 00:26:01.159 } 00:26:01.159 ] 00:26:01.159 11:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.159 11:17:09 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:01.160 [2024-04-18 11:17:09.202706] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
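[editor note] The JSON subsystem listing above is the result of the rpc_cmd calls traced just before it; rpc_cmd in these tests is a thin wrapper around scripts/rpc.py, so the same target configuration can be reproduced directly with the arguments shown in the log. Sketch only, assuming the default /var/tmp/spdk.sock socket:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # what rpc_cmd forwards to

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems   # prints the JSON shown above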
00:26:01.160 [2024-04-18 11:17:09.202814] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82505 ] 00:26:01.160 [2024-04-18 11:17:09.368591] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:01.160 [2024-04-18 11:17:09.368752] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:01.160 [2024-04-18 11:17:09.368773] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:01.160 [2024-04-18 11:17:09.368806] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:01.160 [2024-04-18 11:17:09.368827] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:01.160 [2024-04-18 11:17:09.369019] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:01.160 [2024-04-18 11:17:09.369097] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:26:01.160 [2024-04-18 11:17:09.376137] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:01.160 [2024-04-18 11:17:09.376175] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:01.160 [2024-04-18 11:17:09.376187] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:01.160 [2024-04-18 11:17:09.376194] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:01.160 [2024-04-18 11:17:09.376308] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.160 [2024-04-18 11:17:09.376331] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.160 [2024-04-18 11:17:09.376341] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.160 [2024-04-18 11:17:09.376367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:01.160 [2024-04-18 11:17:09.376412] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.420 [2024-04-18 11:17:09.384144] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.420 [2024-04-18 11:17:09.384178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.420 [2024-04-18 11:17:09.384187] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.420 [2024-04-18 11:17:09.384197] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.420 [2024-04-18 11:17:09.384220] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:01.420 [2024-04-18 11:17:09.384238] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:01.420 [2024-04-18 11:17:09.384250] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:01.420 [2024-04-18 11:17:09.384288] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.420 [2024-04-18 11:17:09.384299] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:26:01.420 [2024-04-18 11:17:09.384307] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.420 [2024-04-18 11:17:09.384329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.420 [2024-04-18 11:17:09.384371] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.420 [2024-04-18 11:17:09.384480] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.420 [2024-04-18 11:17:09.384503] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.420 [2024-04-18 11:17:09.384512] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.420 [2024-04-18 11:17:09.384520] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.420 [2024-04-18 11:17:09.384532] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:01.420 [2024-04-18 11:17:09.384551] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:01.420 [2024-04-18 11:17:09.384565] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.420 [2024-04-18 11:17:09.384574] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.420 [2024-04-18 11:17:09.384582] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.421 [2024-04-18 11:17:09.384600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.421 [2024-04-18 11:17:09.384639] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.421 [2024-04-18 11:17:09.384725] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.421 [2024-04-18 11:17:09.384738] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.421 [2024-04-18 11:17:09.384744] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.384752] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.421 [2024-04-18 11:17:09.384764] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:01.421 [2024-04-18 11:17:09.384779] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:01.421 [2024-04-18 11:17:09.384793] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.384811] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.384819] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.421 [2024-04-18 11:17:09.384833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.421 [2024-04-18 11:17:09.384866] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.421 [2024-04-18 11:17:09.384939] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:26:01.421 [2024-04-18 11:17:09.384951] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.421 [2024-04-18 11:17:09.384961] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.384969] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.421 [2024-04-18 11:17:09.384980] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:01.421 [2024-04-18 11:17:09.384998] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.385007] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.385015] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.421 [2024-04-18 11:17:09.385029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.421 [2024-04-18 11:17:09.385062] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.421 [2024-04-18 11:17:09.385154] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.421 [2024-04-18 11:17:09.385168] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.421 [2024-04-18 11:17:09.385175] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.385182] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.421 [2024-04-18 11:17:09.385191] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:01.421 [2024-04-18 11:17:09.385202] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:01.421 [2024-04-18 11:17:09.385216] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:01.421 [2024-04-18 11:17:09.385326] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:01.421 [2024-04-18 11:17:09.385346] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:01.421 [2024-04-18 11:17:09.385364] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.385412] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.385426] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.421 [2024-04-18 11:17:09.385441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.421 [2024-04-18 11:17:09.385474] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.421 [2024-04-18 11:17:09.385555] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.421 [2024-04-18 11:17:09.385570] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.421 [2024-04-18 
11:17:09.385577] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.385584] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.421 [2024-04-18 11:17:09.385594] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:01.421 [2024-04-18 11:17:09.385619] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.385629] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.385636] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.421 [2024-04-18 11:17:09.385651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.421 [2024-04-18 11:17:09.385679] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.421 [2024-04-18 11:17:09.385765] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.421 [2024-04-18 11:17:09.385777] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.421 [2024-04-18 11:17:09.385783] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.385790] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.421 [2024-04-18 11:17:09.385799] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:01.421 [2024-04-18 11:17:09.385810] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:01.421 [2024-04-18 11:17:09.385839] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:01.421 [2024-04-18 11:17:09.385863] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:01.421 [2024-04-18 11:17:09.385888] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.385903] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.421 [2024-04-18 11:17:09.385919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.421 [2024-04-18 11:17:09.385952] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.421 [2024-04-18 11:17:09.386092] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.421 [2024-04-18 11:17:09.386130] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.421 [2024-04-18 11:17:09.386140] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386148] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:26:01.421 [2024-04-18 11:17:09.386157] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on 
tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:26:01.421 [2024-04-18 11:17:09.386166] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386181] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386190] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386212] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.421 [2024-04-18 11:17:09.386225] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.421 [2024-04-18 11:17:09.386232] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386239] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.421 [2024-04-18 11:17:09.386259] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:01.421 [2024-04-18 11:17:09.386287] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:01.421 [2024-04-18 11:17:09.386295] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:01.421 [2024-04-18 11:17:09.386307] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:01.421 [2024-04-18 11:17:09.386317] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:01.421 [2024-04-18 11:17:09.386326] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:01.421 [2024-04-18 11:17:09.386341] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:01.421 [2024-04-18 11:17:09.386361] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386370] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386380] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.421 [2024-04-18 11:17:09.386399] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:01.421 [2024-04-18 11:17:09.386434] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.421 [2024-04-18 11:17:09.386529] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.421 [2024-04-18 11:17:09.386544] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.421 [2024-04-18 11:17:09.386551] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386559] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.421 [2024-04-18 11:17:09.386573] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386581] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386588] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.421 [2024-04-18 
11:17:09.386605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.421 [2024-04-18 11:17:09.386617] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386624] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386630] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:26:01.421 [2024-04-18 11:17:09.386641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.421 [2024-04-18 11:17:09.386654] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.421 [2024-04-18 11:17:09.386661] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.386667] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:26:01.422 [2024-04-18 11:17:09.386678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.422 [2024-04-18 11:17:09.386690] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.386697] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.386704] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.422 [2024-04-18 11:17:09.386714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.422 [2024-04-18 11:17:09.386723] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:01.422 [2024-04-18 11:17:09.386741] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:01.422 [2024-04-18 11:17:09.386755] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.386762] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.422 [2024-04-18 11:17:09.386782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.422 [2024-04-18 11:17:09.386815] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.422 [2024-04-18 11:17:09.386828] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:26:01.422 [2024-04-18 11:17:09.386836] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:26:01.422 [2024-04-18 11:17:09.386844] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.422 [2024-04-18 11:17:09.386852] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.422 [2024-04-18 11:17:09.386975] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.422 [2024-04-18 11:17:09.386987] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.422 [2024-04-18 11:17:09.386994] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387001] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.422 [2024-04-18 11:17:09.387017] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:01.422 [2024-04-18 11:17:09.387028] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:01.422 [2024-04-18 11:17:09.387051] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387060] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.422 [2024-04-18 11:17:09.387081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.422 [2024-04-18 11:17:09.387126] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.422 [2024-04-18 11:17:09.387231] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.422 [2024-04-18 11:17:09.387247] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.422 [2024-04-18 11:17:09.387254] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387266] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:26:01.422 [2024-04-18 11:17:09.387274] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:26:01.422 [2024-04-18 11:17:09.387283] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387295] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387303] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387317] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.422 [2024-04-18 11:17:09.387327] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.422 [2024-04-18 11:17:09.387333] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387341] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.422 [2024-04-18 11:17:09.387372] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:01.422 [2024-04-18 11:17:09.387435] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387452] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.422 [2024-04-18 11:17:09.387468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.422 [2024-04-18 11:17:09.387481] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387488] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387495] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0x614000002040) 00:26:01.422 [2024-04-18 11:17:09.387511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.422 [2024-04-18 11:17:09.387548] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.422 [2024-04-18 11:17:09.387561] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:26:01.422 [2024-04-18 11:17:09.387895] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.422 [2024-04-18 11:17:09.387920] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.422 [2024-04-18 11:17:09.387929] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387937] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=1024, cccid=4 00:26:01.422 [2024-04-18 11:17:09.387953] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=1024 00:26:01.422 [2024-04-18 11:17:09.387962] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387977] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387985] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.387995] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.422 [2024-04-18 11:17:09.388005] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.422 [2024-04-18 11:17:09.388010] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.388018] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:26:01.422 [2024-04-18 11:17:09.432141] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.422 [2024-04-18 11:17:09.432199] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.422 [2024-04-18 11:17:09.432210] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432220] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.422 [2024-04-18 11:17:09.432275] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432289] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.422 [2024-04-18 11:17:09.432313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.422 [2024-04-18 11:17:09.432365] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.422 [2024-04-18 11:17:09.432563] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.422 [2024-04-18 11:17:09.432575] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.422 [2024-04-18 11:17:09.432581] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432589] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=3072, cccid=4 00:26:01.422 [2024-04-18 11:17:09.432598] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=3072 00:26:01.422 [2024-04-18 11:17:09.432608] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432624] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432632] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432647] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.422 [2024-04-18 11:17:09.432658] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.422 [2024-04-18 11:17:09.432664] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.422 [2024-04-18 11:17:09.432694] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432713] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.422 [2024-04-18 11:17:09.432728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.422 [2024-04-18 11:17:09.432768] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.422 [2024-04-18 11:17:09.432900] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.422 [2024-04-18 11:17:09.432913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.422 [2024-04-18 11:17:09.432919] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432927] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8, cccid=4 00:26:01.422 [2024-04-18 11:17:09.432935] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=8 00:26:01.422 [2024-04-18 11:17:09.432942] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432960] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.432968] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.474207] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.422 [2024-04-18 11:17:09.474251] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.422 [2024-04-18 11:17:09.474261] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.422 [2024-04-18 11:17:09.474270] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.422 ===================================================== 00:26:01.422 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:01.422 ===================================================== 00:26:01.422 Controller Capabilities/Features 00:26:01.422 ================================ 00:26:01.422 Vendor ID: 0000 00:26:01.422 Subsystem Vendor ID: 0000 00:26:01.422 Serial Number: .................... 00:26:01.422 Model Number: ........................................ 
00:26:01.422 Firmware Version: 24.05 00:26:01.422 Recommended Arb Burst: 0 00:26:01.423 IEEE OUI Identifier: 00 00 00 00:26:01.423 Multi-path I/O 00:26:01.423 May have multiple subsystem ports: No 00:26:01.423 May have multiple controllers: No 00:26:01.423 Associated with SR-IOV VF: No 00:26:01.423 Max Data Transfer Size: 131072 00:26:01.423 Max Number of Namespaces: 0 00:26:01.423 Max Number of I/O Queues: 1024 00:26:01.423 NVMe Specification Version (VS): 1.3 00:26:01.423 NVMe Specification Version (Identify): 1.3 00:26:01.423 Maximum Queue Entries: 128 00:26:01.423 Contiguous Queues Required: Yes 00:26:01.423 Arbitration Mechanisms Supported 00:26:01.423 Weighted Round Robin: Not Supported 00:26:01.423 Vendor Specific: Not Supported 00:26:01.423 Reset Timeout: 15000 ms 00:26:01.423 Doorbell Stride: 4 bytes 00:26:01.423 NVM Subsystem Reset: Not Supported 00:26:01.423 Command Sets Supported 00:26:01.423 NVM Command Set: Supported 00:26:01.423 Boot Partition: Not Supported 00:26:01.423 Memory Page Size Minimum: 4096 bytes 00:26:01.423 Memory Page Size Maximum: 4096 bytes 00:26:01.423 Persistent Memory Region: Not Supported 00:26:01.423 Optional Asynchronous Events Supported 00:26:01.423 Namespace Attribute Notices: Not Supported 00:26:01.423 Firmware Activation Notices: Not Supported 00:26:01.423 ANA Change Notices: Not Supported 00:26:01.423 PLE Aggregate Log Change Notices: Not Supported 00:26:01.423 LBA Status Info Alert Notices: Not Supported 00:26:01.423 EGE Aggregate Log Change Notices: Not Supported 00:26:01.423 Normal NVM Subsystem Shutdown event: Not Supported 00:26:01.423 Zone Descriptor Change Notices: Not Supported 00:26:01.423 Discovery Log Change Notices: Supported 00:26:01.423 Controller Attributes 00:26:01.423 128-bit Host Identifier: Not Supported 00:26:01.423 Non-Operational Permissive Mode: Not Supported 00:26:01.423 NVM Sets: Not Supported 00:26:01.423 Read Recovery Levels: Not Supported 00:26:01.423 Endurance Groups: Not Supported 00:26:01.423 Predictable Latency Mode: Not Supported 00:26:01.423 Traffic Based Keep ALive: Not Supported 00:26:01.423 Namespace Granularity: Not Supported 00:26:01.423 SQ Associations: Not Supported 00:26:01.423 UUID List: Not Supported 00:26:01.423 Multi-Domain Subsystem: Not Supported 00:26:01.423 Fixed Capacity Management: Not Supported 00:26:01.423 Variable Capacity Management: Not Supported 00:26:01.423 Delete Endurance Group: Not Supported 00:26:01.423 Delete NVM Set: Not Supported 00:26:01.423 Extended LBA Formats Supported: Not Supported 00:26:01.423 Flexible Data Placement Supported: Not Supported 00:26:01.423 00:26:01.423 Controller Memory Buffer Support 00:26:01.423 ================================ 00:26:01.423 Supported: No 00:26:01.423 00:26:01.423 Persistent Memory Region Support 00:26:01.423 ================================ 00:26:01.423 Supported: No 00:26:01.423 00:26:01.423 Admin Command Set Attributes 00:26:01.423 ============================ 00:26:01.423 Security Send/Receive: Not Supported 00:26:01.423 Format NVM: Not Supported 00:26:01.423 Firmware Activate/Download: Not Supported 00:26:01.423 Namespace Management: Not Supported 00:26:01.423 Device Self-Test: Not Supported 00:26:01.423 Directives: Not Supported 00:26:01.423 NVMe-MI: Not Supported 00:26:01.423 Virtualization Management: Not Supported 00:26:01.423 Doorbell Buffer Config: Not Supported 00:26:01.423 Get LBA Status Capability: Not Supported 00:26:01.423 Command & Feature Lockdown Capability: Not Supported 00:26:01.423 Abort Command Limit: 1 00:26:01.423 Async 
Event Request Limit: 4 00:26:01.423 Number of Firmware Slots: N/A 00:26:01.423 Firmware Slot 1 Read-Only: N/A 00:26:01.423 Firmware Activation Without Reset: N/A 00:26:01.423 Multiple Update Detection Support: N/A 00:26:01.423 Firmware Update Granularity: No Information Provided 00:26:01.423 Per-Namespace SMART Log: No 00:26:01.423 Asymmetric Namespace Access Log Page: Not Supported 00:26:01.423 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:01.423 Command Effects Log Page: Not Supported 00:26:01.423 Get Log Page Extended Data: Supported 00:26:01.423 Telemetry Log Pages: Not Supported 00:26:01.423 Persistent Event Log Pages: Not Supported 00:26:01.423 Supported Log Pages Log Page: May Support 00:26:01.423 Commands Supported & Effects Log Page: Not Supported 00:26:01.423 Feature Identifiers & Effects Log Page:May Support 00:26:01.423 NVMe-MI Commands & Effects Log Page: May Support 00:26:01.423 Data Area 4 for Telemetry Log: Not Supported 00:26:01.423 Error Log Page Entries Supported: 128 00:26:01.423 Keep Alive: Not Supported 00:26:01.423 00:26:01.423 NVM Command Set Attributes 00:26:01.423 ========================== 00:26:01.423 Submission Queue Entry Size 00:26:01.423 Max: 1 00:26:01.423 Min: 1 00:26:01.423 Completion Queue Entry Size 00:26:01.423 Max: 1 00:26:01.423 Min: 1 00:26:01.423 Number of Namespaces: 0 00:26:01.423 Compare Command: Not Supported 00:26:01.423 Write Uncorrectable Command: Not Supported 00:26:01.423 Dataset Management Command: Not Supported 00:26:01.423 Write Zeroes Command: Not Supported 00:26:01.423 Set Features Save Field: Not Supported 00:26:01.423 Reservations: Not Supported 00:26:01.423 Timestamp: Not Supported 00:26:01.423 Copy: Not Supported 00:26:01.423 Volatile Write Cache: Not Present 00:26:01.423 Atomic Write Unit (Normal): 1 00:26:01.423 Atomic Write Unit (PFail): 1 00:26:01.423 Atomic Compare & Write Unit: 1 00:26:01.423 Fused Compare & Write: Supported 00:26:01.423 Scatter-Gather List 00:26:01.423 SGL Command Set: Supported 00:26:01.423 SGL Keyed: Supported 00:26:01.423 SGL Bit Bucket Descriptor: Not Supported 00:26:01.423 SGL Metadata Pointer: Not Supported 00:26:01.423 Oversized SGL: Not Supported 00:26:01.423 SGL Metadata Address: Not Supported 00:26:01.423 SGL Offset: Supported 00:26:01.423 Transport SGL Data Block: Not Supported 00:26:01.423 Replay Protected Memory Block: Not Supported 00:26:01.423 00:26:01.423 Firmware Slot Information 00:26:01.423 ========================= 00:26:01.423 Active slot: 0 00:26:01.423 00:26:01.423 00:26:01.423 Error Log 00:26:01.423 ========= 00:26:01.423 00:26:01.423 Active Namespaces 00:26:01.423 ================= 00:26:01.423 Discovery Log Page 00:26:01.423 ================== 00:26:01.423 Generation Counter: 2 00:26:01.423 Number of Records: 2 00:26:01.423 Record Format: 0 00:26:01.423 00:26:01.423 Discovery Log Entry 0 00:26:01.423 ---------------------- 00:26:01.423 Transport Type: 3 (TCP) 00:26:01.423 Address Family: 1 (IPv4) 00:26:01.423 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:01.423 Entry Flags: 00:26:01.423 Duplicate Returned Information: 1 00:26:01.423 Explicit Persistent Connection Support for Discovery: 1 00:26:01.423 Transport Requirements: 00:26:01.423 Secure Channel: Not Required 00:26:01.423 Port ID: 0 (0x0000) 00:26:01.423 Controller ID: 65535 (0xffff) 00:26:01.423 Admin Max SQ Size: 128 00:26:01.423 Transport Service Identifier: 4420 00:26:01.423 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:01.423 Transport Address: 10.0.0.2 00:26:01.423 
Discovery Log Entry 1 00:26:01.423 ---------------------- 00:26:01.423 Transport Type: 3 (TCP) 00:26:01.423 Address Family: 1 (IPv4) 00:26:01.423 Subsystem Type: 2 (NVM Subsystem) 00:26:01.423 Entry Flags: 00:26:01.423 Duplicate Returned Information: 0 00:26:01.423 Explicit Persistent Connection Support for Discovery: 0 00:26:01.423 Transport Requirements: 00:26:01.423 Secure Channel: Not Required 00:26:01.423 Port ID: 0 (0x0000) 00:26:01.423 Controller ID: 65535 (0xffff) 00:26:01.423 Admin Max SQ Size: 128 00:26:01.423 Transport Service Identifier: 4420 00:26:01.423 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:01.423 Transport Address: 10.0.0.2 [2024-04-18 11:17:09.474493] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:01.423 [2024-04-18 11:17:09.474527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.423 [2024-04-18 11:17:09.474542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.423 [2024-04-18 11:17:09.474553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.423 [2024-04-18 11:17:09.474563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.423 [2024-04-18 11:17:09.474591] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.423 [2024-04-18 11:17:09.474602] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.423 [2024-04-18 11:17:09.474610] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.423 [2024-04-18 11:17:09.474628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.424 [2024-04-18 11:17:09.474675] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.424 [2024-04-18 11:17:09.474784] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.424 [2024-04-18 11:17:09.474798] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.424 [2024-04-18 11:17:09.474806] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.474814] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.424 [2024-04-18 11:17:09.474831] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.474839] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.474847] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.424 [2024-04-18 11:17:09.474862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.424 [2024-04-18 11:17:09.474906] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.424 [2024-04-18 11:17:09.475027] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.424 [2024-04-18 11:17:09.475046] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.424 [2024-04-18 
11:17:09.475054] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475061] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.424 [2024-04-18 11:17:09.475071] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:01.424 [2024-04-18 11:17:09.475082] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:01.424 [2024-04-18 11:17:09.475101] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475127] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475135] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.424 [2024-04-18 11:17:09.475156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.424 [2024-04-18 11:17:09.475193] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.424 [2024-04-18 11:17:09.475275] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.424 [2024-04-18 11:17:09.475287] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.424 [2024-04-18 11:17:09.475293] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475301] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.424 [2024-04-18 11:17:09.475321] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475329] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475336] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.424 [2024-04-18 11:17:09.475350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.424 [2024-04-18 11:17:09.475379] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.424 [2024-04-18 11:17:09.475451] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.424 [2024-04-18 11:17:09.475463] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.424 [2024-04-18 11:17:09.475469] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475476] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.424 [2024-04-18 11:17:09.475494] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475502] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475510] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.424 [2024-04-18 11:17:09.475523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.424 [2024-04-18 11:17:09.475556] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.424 [2024-04-18 11:17:09.475631] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.424 [2024-04-18 11:17:09.475642] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.424 [2024-04-18 11:17:09.475649] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475656] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.424 [2024-04-18 11:17:09.475674] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475682] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.424 [2024-04-18 11:17:09.475702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.424 [2024-04-18 11:17:09.475729] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.424 [2024-04-18 11:17:09.475799] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.424 [2024-04-18 11:17:09.475811] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.424 [2024-04-18 11:17:09.475817] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475824] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.424 [2024-04-18 11:17:09.475842] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475850] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.475857] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.424 [2024-04-18 11:17:09.475870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.424 [2024-04-18 11:17:09.475896] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.424 [2024-04-18 11:17:09.475984] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.424 [2024-04-18 11:17:09.475996] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.424 [2024-04-18 11:17:09.476002] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.476009] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.424 [2024-04-18 11:17:09.476027] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.476036] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.476042] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.424 [2024-04-18 11:17:09.476056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.424 [2024-04-18 11:17:09.476083] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.424 [2024-04-18 11:17:09.480133] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.424 [2024-04-18 11:17:09.480158] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.424 [2024-04-18 11:17:09.480167] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.480174] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.424 [2024-04-18 11:17:09.480196] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.480205] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.480212] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.424 [2024-04-18 11:17:09.480227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.424 [2024-04-18 11:17:09.480283] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.424 [2024-04-18 11:17:09.480368] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.424 [2024-04-18 11:17:09.480381] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.424 [2024-04-18 11:17:09.480387] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.424 [2024-04-18 11:17:09.480394] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.424 [2024-04-18 11:17:09.480409] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:26:01.424 00:26:01.424 11:17:09 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:01.424 [2024-04-18 11:17:09.580862] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
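[editor note] For reference, the two discovery log entries printed above are what a Linux kernel initiator would consume; a hedged sketch using nvme-cli with the address, service id, and NQN reported in the log. This is not part of the test run itself, which instead drives spdk_nvme_identify against the NVM subsystem in the run that starts above:

    # The nvme-tcp module was loaded earlier in this job, so the kernel initiator
    # could fetch the same discovery log and connect to the NVM subsystem entry.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1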
00:26:01.424 [2024-04-18 11:17:09.580977] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82508 ] 00:26:01.687 [2024-04-18 11:17:09.747600] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:01.687 [2024-04-18 11:17:09.747774] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:01.687 [2024-04-18 11:17:09.747796] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:01.687 [2024-04-18 11:17:09.747830] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:01.687 [2024-04-18 11:17:09.747850] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:01.687 [2024-04-18 11:17:09.748052] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:01.687 [2024-04-18 11:17:09.748146] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:26:01.687 [2024-04-18 11:17:09.755131] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:01.687 [2024-04-18 11:17:09.755178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:01.687 [2024-04-18 11:17:09.755190] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:01.687 [2024-04-18 11:17:09.755199] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:01.687 [2024-04-18 11:17:09.755303] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.755326] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.755336] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.687 [2024-04-18 11:17:09.755364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:01.687 [2024-04-18 11:17:09.755412] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.687 [2024-04-18 11:17:09.763153] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.687 [2024-04-18 11:17:09.763201] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.687 [2024-04-18 11:17:09.763212] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.763223] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.687 [2024-04-18 11:17:09.763245] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:01.687 [2024-04-18 11:17:09.763274] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:01.687 [2024-04-18 11:17:09.763292] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:01.687 [2024-04-18 11:17:09.763319] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.763330] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.763338] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.687 [2024-04-18 11:17:09.763362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.687 [2024-04-18 11:17:09.763409] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.687 [2024-04-18 11:17:09.763660] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.687 [2024-04-18 11:17:09.763688] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.687 [2024-04-18 11:17:09.763701] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.763710] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.687 [2024-04-18 11:17:09.763723] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:01.687 [2024-04-18 11:17:09.763739] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:01.687 [2024-04-18 11:17:09.763753] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.763762] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.763774] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.687 [2024-04-18 11:17:09.763793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.687 [2024-04-18 11:17:09.763824] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.687 [2024-04-18 11:17:09.764282] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.687 [2024-04-18 11:17:09.764306] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.687 [2024-04-18 11:17:09.764315] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.764322] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.687 [2024-04-18 11:17:09.764334] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:01.687 [2024-04-18 11:17:09.764356] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:01.687 [2024-04-18 11:17:09.764377] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.764387] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.764396] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.687 [2024-04-18 11:17:09.764411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.687 [2024-04-18 11:17:09.764446] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.687 [2024-04-18 11:17:09.764542] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.687 [2024-04-18 11:17:09.764555] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:26:01.687 [2024-04-18 11:17:09.764561] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.764568] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.687 [2024-04-18 11:17:09.764579] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:01.687 [2024-04-18 11:17:09.764598] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.764607] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.764615] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.687 [2024-04-18 11:17:09.764635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.687 [2024-04-18 11:17:09.764666] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.687 [2024-04-18 11:17:09.764921] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.687 [2024-04-18 11:17:09.764934] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.687 [2024-04-18 11:17:09.764941] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.764947] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.687 [2024-04-18 11:17:09.764962] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:01.687 [2024-04-18 11:17:09.764974] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:01.687 [2024-04-18 11:17:09.764989] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:01.687 [2024-04-18 11:17:09.765100] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:01.687 [2024-04-18 11:17:09.765127] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:01.687 [2024-04-18 11:17:09.765144] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.765153] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.765160] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.687 [2024-04-18 11:17:09.765176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.687 [2024-04-18 11:17:09.765215] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.687 [2024-04-18 11:17:09.765635] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.687 [2024-04-18 11:17:09.765661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.687 [2024-04-18 11:17:09.765670] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.765677] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.687 [2024-04-18 11:17:09.765688] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:01.687 [2024-04-18 11:17:09.765708] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.765718] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.765725] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.687 [2024-04-18 11:17:09.765740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.687 [2024-04-18 11:17:09.765771] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.687 [2024-04-18 11:17:09.765866] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.687 [2024-04-18 11:17:09.765879] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.687 [2024-04-18 11:17:09.765885] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.765892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.687 [2024-04-18 11:17:09.765902] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:01.687 [2024-04-18 11:17:09.765918] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:01.687 [2024-04-18 11:17:09.765949] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:01.687 [2024-04-18 11:17:09.765976] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:01.687 [2024-04-18 11:17:09.766005] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.766015] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.687 [2024-04-18 11:17:09.766030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.687 [2024-04-18 11:17:09.766064] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.687 [2024-04-18 11:17:09.766567] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.687 [2024-04-18 11:17:09.766595] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.687 [2024-04-18 11:17:09.766604] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.766613] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:26:01.687 [2024-04-18 11:17:09.766622] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:26:01.687 [2024-04-18 11:17:09.766631] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.766647] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.766655] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.766677] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.687 [2024-04-18 11:17:09.766690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.687 [2024-04-18 11:17:09.766697] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.687 [2024-04-18 11:17:09.766704] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.687 [2024-04-18 11:17:09.766727] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:01.687 [2024-04-18 11:17:09.766740] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:01.687 [2024-04-18 11:17:09.766749] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:01.688 [2024-04-18 11:17:09.766763] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:01.688 [2024-04-18 11:17:09.766773] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:01.688 [2024-04-18 11:17:09.766782] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.766799] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.766816] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.766826] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.766834] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.766854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:01.688 [2024-04-18 11:17:09.766889] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.688 [2024-04-18 11:17:09.771138] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.771171] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.771180] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.771188] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.771208] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.771239] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.771248] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.771275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.688 [2024-04-18 11:17:09.771290] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 
[2024-04-18 11:17:09.771298] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.771305] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.771316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.688 [2024-04-18 11:17:09.771327] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.771333] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.771340] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.771350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.688 [2024-04-18 11:17:09.771361] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.771367] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.771374] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.771389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.688 [2024-04-18 11:17:09.771400] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.771424] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.771439] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.771447] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.771468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.771510] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:26:01.688 [2024-04-18 11:17:09.771523] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:26:01.688 [2024-04-18 11:17:09.771531] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:26:01.688 [2024-04-18 11:17:09.771539] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.688 [2024-04-18 11:17:09.771547] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.688 [2024-04-18 11:17:09.772031] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.772055] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.772063] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.772071] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.772086] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:01.688 [2024-04-18 11:17:09.772098] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.772128] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.772142] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.772160] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.772170] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.772178] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.772194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:01.688 [2024-04-18 11:17:09.772227] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.688 [2024-04-18 11:17:09.772696] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.772720] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.772728] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.772735] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.772845] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.772878] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.772898] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.772908] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.772923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.772959] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.688 [2024-04-18 11:17:09.773424] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.688 [2024-04-18 11:17:09.773453] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.688 [2024-04-18 11:17:09.773462] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.773471] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:26:01.688 [2024-04-18 11:17:09.773479] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:26:01.688 [2024-04-18 11:17:09.773488] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.773503] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.773511] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.773528] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.773539] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.773546] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.773553] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.773599] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:01.688 [2024-04-18 11:17:09.773628] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.773657] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.773677] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.773686] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.773708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.773743] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.688 [2024-04-18 11:17:09.774097] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.688 [2024-04-18 11:17:09.774131] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.688 [2024-04-18 11:17:09.774140] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774147] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:26:01.688 [2024-04-18 11:17:09.774156] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:26:01.688 [2024-04-18 11:17:09.774163] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774176] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774184] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774197] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.774211] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.774218] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774225] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.774267] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.774294] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors 
(timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.774314] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774323] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.774339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.774373] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.688 [2024-04-18 11:17:09.774817] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.688 [2024-04-18 11:17:09.774839] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.688 [2024-04-18 11:17:09.774846] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774853] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:26:01.688 [2024-04-18 11:17:09.774861] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:26:01.688 [2024-04-18 11:17:09.774869] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774881] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774888] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774917] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.774932] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.774939] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.774946] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.774984] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.775002] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.775019] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.775031] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.775041] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.775054] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:01.688 [2024-04-18 11:17:09.775063] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:01.688 [2024-04-18 11:17:09.775074] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:01.688 [2024-04-18 11:17:09.779139] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.779167] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.779191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.779208] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.779217] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.779224] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.779237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.688 [2024-04-18 11:17:09.779279] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.688 [2024-04-18 11:17:09.779299] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:26:01.688 [2024-04-18 11:17:09.779729] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.779759] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.779770] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.779778] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.779792] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.779802] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.779808] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.779816] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.779840] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.779850] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.779864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.779896] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:26:01.688 [2024-04-18 11:17:09.780299] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.780323] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.780331] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.780339] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.780368] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.780378] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.780397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.780428] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:26:01.688 [2024-04-18 11:17:09.780782] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.780805] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.780813] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.780820] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.780842] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.780852] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.780871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.780901] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:26:01.688 [2024-04-18 11:17:09.781236] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.688 [2024-04-18 11:17:09.781258] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.688 [2024-04-18 11:17:09.781266] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.781273] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:26:01.688 [2024-04-18 11:17:09.781313] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.781324] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.781339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.781354] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.781365] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.781379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.781393] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.781406] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.781419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.781437] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.781445] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:26:01.688 [2024-04-18 11:17:09.781457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff 
cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.688 [2024-04-18 11:17:09.781500] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:26:01.688 [2024-04-18 11:17:09.781519] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:26:01.688 [2024-04-18 11:17:09.781528] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:26:01.688 [2024-04-18 11:17:09.781540] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:26:01.688 [2024-04-18 11:17:09.782166] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.688 [2024-04-18 11:17:09.782193] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.688 [2024-04-18 11:17:09.782203] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.782211] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8192, cccid=5 00:26:01.688 [2024-04-18 11:17:09.782220] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x614000002040): expected_datao=0, payload_size=8192 00:26:01.688 [2024-04-18 11:17:09.782233] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.782264] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.782274] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.688 [2024-04-18 11:17:09.782285] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.688 [2024-04-18 11:17:09.782295] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.688 [2024-04-18 11:17:09.782305] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782313] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=4 00:26:01.689 [2024-04-18 11:17:09.782321] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:26:01.689 [2024-04-18 11:17:09.782328] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782345] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782352] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782361] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.689 [2024-04-18 11:17:09.782371] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.689 [2024-04-18 11:17:09.782377] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782383] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=6 00:26:01.689 [2024-04-18 11:17:09.782391] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:26:01.689 [2024-04-18 11:17:09.782398] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782417] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782424] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782434] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:01.689 [2024-04-18 11:17:09.782443] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:01.689 [2024-04-18 11:17:09.782449] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782456] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=7 00:26:01.689 [2024-04-18 11:17:09.782463] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:26:01.689 [2024-04-18 11:17:09.782470] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782482] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782489] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782504] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.689 [2024-04-18 11:17:09.782513] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.689 [2024-04-18 11:17:09.782520] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782528] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:26:01.689 [2024-04-18 11:17:09.782559] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.689 [2024-04-18 11:17:09.782569] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.689 [2024-04-18 11:17:09.782575] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782582] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:26:01.689 [2024-04-18 11:17:09.782604] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.689 [2024-04-18 11:17:09.782614] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.689 [2024-04-18 11:17:09.782624] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782631] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x614000002040 00:26:01.689 [2024-04-18 11:17:09.782644] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.689 [2024-04-18 11:17:09.782654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.689 [2024-04-18 11:17:09.782660] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782666] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:26:01.689 ===================================================== 00:26:01.689 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:01.689 ===================================================== 00:26:01.689 Controller Capabilities/Features 00:26:01.689 ================================ 00:26:01.689 Vendor ID: 8086 00:26:01.689 Subsystem Vendor ID: 8086 00:26:01.689 Serial Number: SPDK00000000000001 00:26:01.689 Model Number: SPDK bdev Controller 00:26:01.689 Firmware Version: 24.05 00:26:01.689 Recommended Arb Burst: 6 00:26:01.689 IEEE OUI Identifier: e4 d2 5c 00:26:01.689 Multi-path I/O 00:26:01.689 May have multiple 
subsystem ports: Yes 00:26:01.689 May have multiple controllers: Yes 00:26:01.689 Associated with SR-IOV VF: No 00:26:01.689 Max Data Transfer Size: 131072 00:26:01.689 Max Number of Namespaces: 32 00:26:01.689 Max Number of I/O Queues: 127 00:26:01.689 NVMe Specification Version (VS): 1.3 00:26:01.689 NVMe Specification Version (Identify): 1.3 00:26:01.689 Maximum Queue Entries: 128 00:26:01.689 Contiguous Queues Required: Yes 00:26:01.689 Arbitration Mechanisms Supported 00:26:01.689 Weighted Round Robin: Not Supported 00:26:01.689 Vendor Specific: Not Supported 00:26:01.689 Reset Timeout: 15000 ms 00:26:01.689 Doorbell Stride: 4 bytes 00:26:01.689 NVM Subsystem Reset: Not Supported 00:26:01.689 Command Sets Supported 00:26:01.689 NVM Command Set: Supported 00:26:01.689 Boot Partition: Not Supported 00:26:01.689 Memory Page Size Minimum: 4096 bytes 00:26:01.689 Memory Page Size Maximum: 4096 bytes 00:26:01.689 Persistent Memory Region: Not Supported 00:26:01.689 Optional Asynchronous Events Supported 00:26:01.689 Namespace Attribute Notices: Supported 00:26:01.689 Firmware Activation Notices: Not Supported 00:26:01.689 ANA Change Notices: Not Supported 00:26:01.689 PLE Aggregate Log Change Notices: Not Supported 00:26:01.689 LBA Status Info Alert Notices: Not Supported 00:26:01.689 EGE Aggregate Log Change Notices: Not Supported 00:26:01.689 Normal NVM Subsystem Shutdown event: Not Supported 00:26:01.689 Zone Descriptor Change Notices: Not Supported 00:26:01.689 Discovery Log Change Notices: Not Supported 00:26:01.689 Controller Attributes 00:26:01.689 128-bit Host Identifier: Supported 00:26:01.689 Non-Operational Permissive Mode: Not Supported 00:26:01.689 NVM Sets: Not Supported 00:26:01.689 Read Recovery Levels: Not Supported 00:26:01.689 Endurance Groups: Not Supported 00:26:01.689 Predictable Latency Mode: Not Supported 00:26:01.689 Traffic Based Keep ALive: Not Supported 00:26:01.689 Namespace Granularity: Not Supported 00:26:01.689 SQ Associations: Not Supported 00:26:01.689 UUID List: Not Supported 00:26:01.689 Multi-Domain Subsystem: Not Supported 00:26:01.689 Fixed Capacity Management: Not Supported 00:26:01.689 Variable Capacity Management: Not Supported 00:26:01.689 Delete Endurance Group: Not Supported 00:26:01.689 Delete NVM Set: Not Supported 00:26:01.689 Extended LBA Formats Supported: Not Supported 00:26:01.689 Flexible Data Placement Supported: Not Supported 00:26:01.689 00:26:01.689 Controller Memory Buffer Support 00:26:01.689 ================================ 00:26:01.689 Supported: No 00:26:01.689 00:26:01.689 Persistent Memory Region Support 00:26:01.689 ================================ 00:26:01.689 Supported: No 00:26:01.689 00:26:01.689 Admin Command Set Attributes 00:26:01.689 ============================ 00:26:01.689 Security Send/Receive: Not Supported 00:26:01.689 Format NVM: Not Supported 00:26:01.689 Firmware Activate/Download: Not Supported 00:26:01.689 Namespace Management: Not Supported 00:26:01.689 Device Self-Test: Not Supported 00:26:01.689 Directives: Not Supported 00:26:01.689 NVMe-MI: Not Supported 00:26:01.689 Virtualization Management: Not Supported 00:26:01.689 Doorbell Buffer Config: Not Supported 00:26:01.689 Get LBA Status Capability: Not Supported 00:26:01.689 Command & Feature Lockdown Capability: Not Supported 00:26:01.689 Abort Command Limit: 4 00:26:01.689 Async Event Request Limit: 4 00:26:01.689 Number of Firmware Slots: N/A 00:26:01.689 Firmware Slot 1 Read-Only: N/A 00:26:01.689 Firmware Activation Without Reset: N/A 00:26:01.689 
Multiple Update Detection Support: N/A 00:26:01.689 Firmware Update Granularity: No Information Provided 00:26:01.689 Per-Namespace SMART Log: No 00:26:01.689 Asymmetric Namespace Access Log Page: Not Supported 00:26:01.689 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:01.689 Command Effects Log Page: Supported 00:26:01.689 Get Log Page Extended Data: Supported 00:26:01.689 Telemetry Log Pages: Not Supported 00:26:01.689 Persistent Event Log Pages: Not Supported 00:26:01.689 Supported Log Pages Log Page: May Support 00:26:01.689 Commands Supported & Effects Log Page: Not Supported 00:26:01.689 Feature Identifiers & Effects Log Page:May Support 00:26:01.689 NVMe-MI Commands & Effects Log Page: May Support 00:26:01.689 Data Area 4 for Telemetry Log: Not Supported 00:26:01.689 Error Log Page Entries Supported: 128 00:26:01.689 Keep Alive: Supported 00:26:01.689 Keep Alive Granularity: 10000 ms 00:26:01.689 00:26:01.689 NVM Command Set Attributes 00:26:01.689 ========================== 00:26:01.689 Submission Queue Entry Size 00:26:01.689 Max: 64 00:26:01.689 Min: 64 00:26:01.689 Completion Queue Entry Size 00:26:01.689 Max: 16 00:26:01.689 Min: 16 00:26:01.689 Number of Namespaces: 32 00:26:01.689 Compare Command: Supported 00:26:01.689 Write Uncorrectable Command: Not Supported 00:26:01.689 Dataset Management Command: Supported 00:26:01.689 Write Zeroes Command: Supported 00:26:01.689 Set Features Save Field: Not Supported 00:26:01.689 Reservations: Supported 00:26:01.689 Timestamp: Not Supported 00:26:01.689 Copy: Supported 00:26:01.689 Volatile Write Cache: Present 00:26:01.689 Atomic Write Unit (Normal): 1 00:26:01.689 Atomic Write Unit (PFail): 1 00:26:01.689 Atomic Compare & Write Unit: 1 00:26:01.689 Fused Compare & Write: Supported 00:26:01.689 Scatter-Gather List 00:26:01.689 SGL Command Set: Supported 00:26:01.689 SGL Keyed: Supported 00:26:01.689 SGL Bit Bucket Descriptor: Not Supported 00:26:01.689 SGL Metadata Pointer: Not Supported 00:26:01.689 Oversized SGL: Not Supported 00:26:01.689 SGL Metadata Address: Not Supported 00:26:01.689 SGL Offset: Supported 00:26:01.689 Transport SGL Data Block: Not Supported 00:26:01.689 Replay Protected Memory Block: Not Supported 00:26:01.689 00:26:01.689 Firmware Slot Information 00:26:01.689 ========================= 00:26:01.689 Active slot: 1 00:26:01.689 Slot 1 Firmware Revision: 24.05 00:26:01.689 00:26:01.689 00:26:01.689 Commands Supported and Effects 00:26:01.689 ============================== 00:26:01.689 Admin Commands 00:26:01.689 -------------- 00:26:01.689 Get Log Page (02h): Supported 00:26:01.689 Identify (06h): Supported 00:26:01.689 Abort (08h): Supported 00:26:01.689 Set Features (09h): Supported 00:26:01.689 Get Features (0Ah): Supported 00:26:01.689 Asynchronous Event Request (0Ch): Supported 00:26:01.689 Keep Alive (18h): Supported 00:26:01.689 I/O Commands 00:26:01.689 ------------ 00:26:01.689 Flush (00h): Supported LBA-Change 00:26:01.689 Write (01h): Supported LBA-Change 00:26:01.689 Read (02h): Supported 00:26:01.689 Compare (05h): Supported 00:26:01.689 Write Zeroes (08h): Supported LBA-Change 00:26:01.689 Dataset Management (09h): Supported LBA-Change 00:26:01.689 Copy (19h): Supported LBA-Change 00:26:01.689 Unknown (79h): Supported LBA-Change 00:26:01.689 Unknown (7Ah): Supported 00:26:01.689 00:26:01.689 Error Log 00:26:01.689 ========= 00:26:01.689 00:26:01.689 Arbitration 00:26:01.689 =========== 00:26:01.689 Arbitration Burst: 1 00:26:01.689 00:26:01.689 Power Management 00:26:01.689 ================ 
00:26:01.689 Number of Power States: 1 00:26:01.689 Current Power State: Power State #0 00:26:01.689 Power State #0: 00:26:01.689 Max Power: 0.00 W 00:26:01.689 Non-Operational State: Operational 00:26:01.689 Entry Latency: Not Reported 00:26:01.689 Exit Latency: Not Reported 00:26:01.689 Relative Read Throughput: 0 00:26:01.689 Relative Read Latency: 0 00:26:01.689 Relative Write Throughput: 0 00:26:01.689 Relative Write Latency: 0 00:26:01.689 Idle Power: Not Reported 00:26:01.689 Active Power: Not Reported 00:26:01.689 Non-Operational Permissive Mode: Not Supported 00:26:01.689 00:26:01.689 Health Information 00:26:01.689 ================== 00:26:01.689 Critical Warnings: 00:26:01.689 Available Spare Space: OK 00:26:01.689 Temperature: OK 00:26:01.689 Device Reliability: OK 00:26:01.689 Read Only: No 00:26:01.689 Volatile Memory Backup: OK 00:26:01.689 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:01.689 Temperature Threshold: [2024-04-18 11:17:09.782874] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.782888] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:26:01.689 [2024-04-18 11:17:09.782905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.689 [2024-04-18 11:17:09.782941] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:26:01.689 [2024-04-18 11:17:09.783394] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.689 [2024-04-18 11:17:09.783418] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.689 [2024-04-18 11:17:09.783432] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.783447] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:26:01.689 [2024-04-18 11:17:09.783543] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:01.689 [2024-04-18 11:17:09.783568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.689 [2024-04-18 11:17:09.783598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.689 [2024-04-18 11:17:09.783609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.689 [2024-04-18 11:17:09.783619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.689 [2024-04-18 11:17:09.783636] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.783645] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.783653] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.689 [2024-04-18 11:17:09.783669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.689 [2024-04-18 11:17:09.783709] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.689 [2024-04-18 11:17:09.784189] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.689 [2024-04-18 11:17:09.784217] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.689 [2024-04-18 11:17:09.784227] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.784235] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.689 [2024-04-18 11:17:09.784251] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.784276] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.689 [2024-04-18 11:17:09.784286] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.689 [2024-04-18 11:17:09.784302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.689 [2024-04-18 11:17:09.784342] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.689 [2024-04-18 11:17:09.784745] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.689 [2024-04-18 11:17:09.784768] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.689 [2024-04-18 11:17:09.784776] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.784783] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.690 [2024-04-18 11:17:09.784794] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:01.690 [2024-04-18 11:17:09.784803] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:01.690 [2024-04-18 11:17:09.784831] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.784844] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.784852] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.690 [2024-04-18 11:17:09.784867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.690 [2024-04-18 11:17:09.784899] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.690 [2024-04-18 11:17:09.785272] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.690 [2024-04-18 11:17:09.785296] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.690 [2024-04-18 11:17:09.785304] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.785311] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.690 [2024-04-18 11:17:09.785332] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.785341] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.785348] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.690 [2024-04-18 11:17:09.785361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:01.690 [2024-04-18 11:17:09.785391] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.690 [2024-04-18 11:17:09.785738] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.690 [2024-04-18 11:17:09.785760] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.690 [2024-04-18 11:17:09.785768] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.785775] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.690 [2024-04-18 11:17:09.785801] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.785811] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.785817] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.690 [2024-04-18 11:17:09.785831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.690 [2024-04-18 11:17:09.785859] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.690 [2024-04-18 11:17:09.786222] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.690 [2024-04-18 11:17:09.786245] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.690 [2024-04-18 11:17:09.786252] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.786259] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.690 [2024-04-18 11:17:09.786278] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.786287] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.786293] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.690 [2024-04-18 11:17:09.786311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.690 [2024-04-18 11:17:09.786343] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.690 [2024-04-18 11:17:09.786656] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.690 [2024-04-18 11:17:09.786678] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.690 [2024-04-18 11:17:09.786685] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.786692] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.690 [2024-04-18 11:17:09.786711] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.786719] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.786726] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.690 [2024-04-18 11:17:09.786743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.690 [2024-04-18 11:17:09.786773] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, 
qid 0 00:26:01.690 [2024-04-18 11:17:09.787065] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.690 [2024-04-18 11:17:09.787086] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.690 [2024-04-18 11:17:09.787094] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.787101] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.690 [2024-04-18 11:17:09.791176] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.791194] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.791202] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:26:01.690 [2024-04-18 11:17:09.791219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.690 [2024-04-18 11:17:09.791258] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:26:01.690 [2024-04-18 11:17:09.791632] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:01.690 [2024-04-18 11:17:09.791656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:01.690 [2024-04-18 11:17:09.791664] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:01.690 [2024-04-18 11:17:09.791672] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:26:01.690 [2024-04-18 11:17:09.791688] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:26:01.690 0 Kelvin (-273 Celsius) 00:26:01.690 Available Spare: 0% 00:26:01.690 Available Spare Threshold: 0% 00:26:01.690 Life Percentage Used: 0% 00:26:01.690 Data Units Read: 0 00:26:01.690 Data Units Written: 0 00:26:01.690 Host Read Commands: 0 00:26:01.690 Host Write Commands: 0 00:26:01.690 Controller Busy Time: 0 minutes 00:26:01.690 Power Cycles: 0 00:26:01.690 Power On Hours: 0 hours 00:26:01.690 Unsafe Shutdowns: 0 00:26:01.690 Unrecoverable Media Errors: 0 00:26:01.690 Lifetime Error Log Entries: 0 00:26:01.690 Warning Temperature Time: 0 minutes 00:26:01.690 Critical Temperature Time: 0 minutes 00:26:01.690 00:26:01.690 Number of Queues 00:26:01.690 ================ 00:26:01.690 Number of I/O Submission Queues: 127 00:26:01.690 Number of I/O Completion Queues: 127 00:26:01.690 00:26:01.690 Active Namespaces 00:26:01.690 ================= 00:26:01.690 Namespace ID:1 00:26:01.690 Error Recovery Timeout: Unlimited 00:26:01.690 Command Set Identifier: NVM (00h) 00:26:01.690 Deallocate: Supported 00:26:01.690 Deallocated/Unwritten Error: Not Supported 00:26:01.690 Deallocated Read Value: Unknown 00:26:01.690 Deallocate in Write Zeroes: Not Supported 00:26:01.690 Deallocated Guard Field: 0xFFFF 00:26:01.690 Flush: Supported 00:26:01.690 Reservation: Supported 00:26:01.690 Namespace Sharing Capabilities: Multiple Controllers 00:26:01.690 Size (in LBAs): 131072 (0GiB) 00:26:01.690 Capacity (in LBAs): 131072 (0GiB) 00:26:01.690 Utilization (in LBAs): 131072 (0GiB) 00:26:01.690 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:01.690 EUI64: ABCDEF0123456789 00:26:01.690 UUID: 6b643b61-1cae-4b50-93c1-22d8e5e5e8cc 00:26:01.690 Thin Provisioning: Not Supported 00:26:01.690 Per-NS Atomic Units: Yes 00:26:01.690 Atomic Boundary Size 
(Normal): 0 00:26:01.690 Atomic Boundary Size (PFail): 0 00:26:01.690 Atomic Boundary Offset: 0 00:26:01.690 Maximum Single Source Range Length: 65535 00:26:01.690 Maximum Copy Length: 65535 00:26:01.690 Maximum Source Range Count: 1 00:26:01.690 NGUID/EUI64 Never Reused: No 00:26:01.690 Namespace Write Protected: No 00:26:01.690 Number of LBA Formats: 1 00:26:01.690 Current LBA Format: LBA Format #00 00:26:01.690 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:01.690 00:26:01.690 11:17:09 -- host/identify.sh@51 -- # sync 00:26:01.690 11:17:09 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.690 11:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.690 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:26:01.949 11:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.949 11:17:09 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:01.949 11:17:09 -- host/identify.sh@56 -- # nvmftestfini 00:26:01.949 11:17:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:01.949 11:17:09 -- nvmf/common.sh@117 -- # sync 00:26:01.949 11:17:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:01.949 11:17:09 -- nvmf/common.sh@120 -- # set +e 00:26:01.949 11:17:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:01.949 11:17:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:01.949 rmmod nvme_tcp 00:26:01.949 rmmod nvme_fabrics 00:26:01.949 rmmod nvme_keyring 00:26:01.949 11:17:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:01.949 11:17:09 -- nvmf/common.sh@124 -- # set -e 00:26:01.949 11:17:09 -- nvmf/common.sh@125 -- # return 0 00:26:01.949 11:17:09 -- nvmf/common.sh@478 -- # '[' -n 82452 ']' 00:26:01.949 11:17:09 -- nvmf/common.sh@479 -- # killprocess 82452 00:26:01.949 11:17:09 -- common/autotest_common.sh@936 -- # '[' -z 82452 ']' 00:26:01.949 11:17:09 -- common/autotest_common.sh@940 -- # kill -0 82452 00:26:01.949 11:17:09 -- common/autotest_common.sh@941 -- # uname 00:26:01.949 11:17:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:01.949 11:17:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82452 00:26:01.949 11:17:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:01.949 killing process with pid 82452 00:26:01.949 11:17:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:01.949 11:17:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82452' 00:26:01.949 11:17:09 -- common/autotest_common.sh@955 -- # kill 82452 00:26:01.949 [2024-04-18 11:17:09.981080] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:01.949 11:17:09 -- common/autotest_common.sh@960 -- # wait 82452 00:26:03.325 11:17:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:03.325 11:17:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:03.325 11:17:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:03.325 11:17:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.325 11:17:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:03.325 11:17:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.325 11:17:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.325 11:17:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.325 11:17:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 
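The teardown traced above (nvmftestfini) amounts to roughly the following steps, shown as a condensed sketch rather than the literal common.sh implementation; the pid and interface names are the ones from this run, and the final namespace removal is assumed to be what _remove_spdk_ns does:

  modprobe -v -r nvme-tcp            # also unloads nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 82452 && wait 82452           # stop the nvmf_tgt process for this test
  ip -4 addr flush nvmf_init_if      # drop the initiator-side address
  ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns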
00:26:03.325 00:26:03.325 real 0m3.934s 00:26:03.325 user 0m10.651s 00:26:03.325 sys 0m0.918s 00:26:03.325 11:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:03.325 ************************************ 00:26:03.325 END TEST nvmf_identify 00:26:03.325 ************************************ 00:26:03.325 11:17:11 -- common/autotest_common.sh@10 -- # set +x 00:26:03.325 11:17:11 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:03.325 11:17:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:03.325 11:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:03.325 11:17:11 -- common/autotest_common.sh@10 -- # set +x 00:26:03.325 ************************************ 00:26:03.325 START TEST nvmf_perf 00:26:03.325 ************************************ 00:26:03.325 11:17:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:03.586 * Looking for test storage... 00:26:03.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:03.586 11:17:11 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:03.586 11:17:11 -- nvmf/common.sh@7 -- # uname -s 00:26:03.586 11:17:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.586 11:17:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.586 11:17:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.586 11:17:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.586 11:17:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.586 11:17:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.586 11:17:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.586 11:17:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.586 11:17:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.586 11:17:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.586 11:17:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:26:03.586 11:17:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:26:03.586 11:17:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.586 11:17:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.586 11:17:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:03.586 11:17:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.586 11:17:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:03.586 11:17:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.586 11:17:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.586 11:17:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.586 11:17:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.586 11:17:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.586 11:17:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.586 11:17:11 -- paths/export.sh@5 -- # export PATH 00:26:03.586 11:17:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.586 11:17:11 -- nvmf/common.sh@47 -- # : 0 00:26:03.586 11:17:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:03.586 11:17:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:03.586 11:17:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.586 11:17:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.586 11:17:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.586 11:17:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:03.586 11:17:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:03.586 11:17:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:03.586 11:17:11 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:03.586 11:17:11 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:03.586 11:17:11 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:03.586 11:17:11 -- host/perf.sh@17 -- # nvmftestinit 00:26:03.586 11:17:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:03.586 11:17:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.586 11:17:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:03.586 11:17:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:03.586 11:17:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:03.586 11:17:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.586 11:17:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.586 11:17:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.586 11:17:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:03.586 11:17:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:03.586 11:17:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:03.586 11:17:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 
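The nvmf_veth_init block that follows builds an all-virtual test network before any NVMe/TCP traffic is sent. Condensed from the commands traced below (the second target interface carrying 10.0.0.3 is created the same way and omitted here), the topology amounts to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The "Cannot find device" and "Cannot open network namespace" messages in the cleanup commands just before this setup are the script's pre-cleanup of any previous topology and are treated as non-fatal (each is followed by "# true" in the trace).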
00:26:03.586 11:17:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:03.586 11:17:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:03.586 11:17:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.586 11:17:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.586 11:17:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:03.586 11:17:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:03.586 11:17:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:03.586 11:17:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:03.586 11:17:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:03.586 11:17:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.586 11:17:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:03.586 11:17:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:03.586 11:17:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:03.586 11:17:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:03.586 11:17:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:03.586 11:17:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:03.586 Cannot find device "nvmf_tgt_br" 00:26:03.586 11:17:11 -- nvmf/common.sh@155 -- # true 00:26:03.586 11:17:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:03.586 Cannot find device "nvmf_tgt_br2" 00:26:03.586 11:17:11 -- nvmf/common.sh@156 -- # true 00:26:03.586 11:17:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:03.586 11:17:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:03.586 Cannot find device "nvmf_tgt_br" 00:26:03.586 11:17:11 -- nvmf/common.sh@158 -- # true 00:26:03.586 11:17:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:03.586 Cannot find device "nvmf_tgt_br2" 00:26:03.586 11:17:11 -- nvmf/common.sh@159 -- # true 00:26:03.586 11:17:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:03.586 11:17:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:03.586 11:17:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:03.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:03.586 11:17:11 -- nvmf/common.sh@162 -- # true 00:26:03.586 11:17:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:03.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:03.586 11:17:11 -- nvmf/common.sh@163 -- # true 00:26:03.586 11:17:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:03.586 11:17:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:03.586 11:17:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:03.586 11:17:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:03.586 11:17:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:03.586 11:17:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:03.586 11:17:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:03.845 11:17:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:03.845 11:17:11 -- nvmf/common.sh@180 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:03.845 11:17:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:03.845 11:17:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:03.845 11:17:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:03.845 11:17:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:03.845 11:17:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:03.845 11:17:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:03.845 11:17:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:03.845 11:17:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:03.845 11:17:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:03.845 11:17:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:03.845 11:17:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:03.845 11:17:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:03.845 11:17:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:03.845 11:17:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:03.845 11:17:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:03.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:26:03.845 00:26:03.845 --- 10.0.0.2 ping statistics --- 00:26:03.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.845 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:26:03.845 11:17:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:03.845 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:03.845 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:26:03.845 00:26:03.845 --- 10.0.0.3 ping statistics --- 00:26:03.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.845 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:03.845 11:17:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:03.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:03.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:26:03.845 00:26:03.845 --- 10.0.0.1 ping statistics --- 00:26:03.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.845 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:26:03.845 11:17:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.845 11:17:11 -- nvmf/common.sh@422 -- # return 0 00:26:03.845 11:17:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:03.845 11:17:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.845 11:17:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:03.845 11:17:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:03.845 11:17:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.845 11:17:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:03.845 11:17:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:03.845 11:17:11 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:03.845 11:17:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:03.845 11:17:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:03.845 11:17:11 -- common/autotest_common.sh@10 -- # set +x 00:26:03.845 11:17:11 -- nvmf/common.sh@470 -- # nvmfpid=82696 00:26:03.845 11:17:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:03.845 11:17:11 -- nvmf/common.sh@471 -- # waitforlisten 82696 00:26:03.845 11:17:11 -- common/autotest_common.sh@817 -- # '[' -z 82696 ']' 00:26:03.846 11:17:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.846 11:17:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:03.846 11:17:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.846 11:17:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:03.846 11:17:11 -- common/autotest_common.sh@10 -- # set +x 00:26:03.846 [2024-04-18 11:17:12.058785] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:03.846 [2024-04-18 11:17:12.058948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.104 [2024-04-18 11:17:12.227938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:04.362 [2024-04-18 11:17:12.478724] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.362 [2024-04-18 11:17:12.478806] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.362 [2024-04-18 11:17:12.478827] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.362 [2024-04-18 11:17:12.478842] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.362 [2024-04-18 11:17:12.478857] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
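With the target running inside the namespace, perf.sh configures it over JSON-RPC. The calls traced in the next lines boil down to the following sketch (rpc.py is scripts/rpc.py talking to /var/tmp/spdk.sock; the Nvme0n1 bdev comes from gen_nvme.sh via load_subsystem_config):

  rpc.py bdev_malloc_create 64 512                         # creates Malloc0
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After this, spdk_nvme_perf is pointed first at the local PCIe controller and then at the 10.0.0.2:4420 listener with varying queue depth (-q), I/O size (-o), read/write mix (-M) and run time (-t), producing the latency tables that follow.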
00:26:04.362 [2024-04-18 11:17:12.479062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.362 [2024-04-18 11:17:12.479719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.362 [2024-04-18 11:17:12.479905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.362 [2024-04-18 11:17:12.479926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.929 11:17:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:04.929 11:17:13 -- common/autotest_common.sh@850 -- # return 0 00:26:04.929 11:17:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:04.929 11:17:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:04.929 11:17:13 -- common/autotest_common.sh@10 -- # set +x 00:26:04.929 11:17:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.929 11:17:13 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:04.929 11:17:13 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:26:05.495 11:17:13 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:26:05.495 11:17:13 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:05.753 11:17:13 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:26:05.753 11:17:13 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:06.012 11:17:14 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:06.012 11:17:14 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:26:06.012 11:17:14 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:06.012 11:17:14 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:06.012 11:17:14 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:06.270 [2024-04-18 11:17:14.274298] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.270 11:17:14 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:06.528 11:17:14 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:06.528 11:17:14 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:06.786 11:17:14 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:06.786 11:17:14 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:07.044 11:17:15 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.303 [2024-04-18 11:17:15.281032] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.303 11:17:15 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:07.561 11:17:15 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:07.561 11:17:15 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:07.561 11:17:15 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:07.561 11:17:15 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:08.938 Initializing NVMe 
Controllers 00:26:08.938 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:08.938 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:26:08.938 Initialization complete. Launching workers. 00:26:08.938 ======================================================== 00:26:08.938 Latency(us) 00:26:08.938 Device Information : IOPS MiB/s Average min max 00:26:08.938 PCIE (0000:00:10.0) NSID 1 from core 0: 22912.00 89.50 1396.11 355.99 7610.58 00:26:08.938 ======================================================== 00:26:08.938 Total : 22912.00 89.50 1396.11 355.99 7610.58 00:26:08.938 00:26:08.938 11:17:16 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:10.314 Initializing NVMe Controllers 00:26:10.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:10.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:10.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:10.314 Initialization complete. Launching workers. 00:26:10.314 ======================================================== 00:26:10.314 Latency(us) 00:26:10.314 Device Information : IOPS MiB/s Average min max 00:26:10.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2604.83 10.18 381.79 158.86 5159.84 00:26:10.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.76 0.48 8145.84 4958.06 12035.19 00:26:10.314 ======================================================== 00:26:10.314 Total : 2727.59 10.65 731.21 158.86 12035.19 00:26:10.314 00:26:10.314 11:17:18 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:11.692 Initializing NVMe Controllers 00:26:11.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:11.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:11.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:11.692 Initialization complete. Launching workers. 00:26:11.692 ======================================================== 00:26:11.692 Latency(us) 00:26:11.692 Device Information : IOPS MiB/s Average min max 00:26:11.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6791.99 26.53 4715.88 1107.37 10990.02 00:26:11.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2648.99 10.35 12174.41 5868.46 28158.18 00:26:11.692 ======================================================== 00:26:11.692 Total : 9440.98 36.88 6808.63 1107.37 28158.18 00:26:11.692 00:26:11.692 11:17:19 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:26:11.692 11:17:19 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:14.978 Initializing NVMe Controllers 00:26:14.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.978 Controller IO queue size 128, less than required. 00:26:14.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.978 Controller IO queue size 128, less than required. 
00:26:14.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:14.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:14.978 Initialization complete. Launching workers. 00:26:14.978 ======================================================== 00:26:14.978 Latency(us) 00:26:14.978 Device Information : IOPS MiB/s Average min max 00:26:14.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 963.11 240.78 141017.54 76526.05 338697.37 00:26:14.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 572.67 143.17 238093.07 118477.97 490676.29 00:26:14.978 ======================================================== 00:26:14.978 Total : 1535.79 383.95 177215.73 76526.05 490676.29 00:26:14.978 00:26:14.978 11:17:22 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:14.978 No valid NVMe controllers or AIO or URING devices found 00:26:14.978 Initializing NVMe Controllers 00:26:14.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.978 Controller IO queue size 128, less than required. 00:26:14.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.978 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:14.978 Controller IO queue size 128, less than required. 00:26:14.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.978 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:26:14.978 WARNING: Some requested NVMe devices were skipped 00:26:14.978 11:17:22 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:18.267 Initializing NVMe Controllers 00:26:18.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:18.267 Controller IO queue size 128, less than required. 00:26:18.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:18.267 Controller IO queue size 128, less than required. 00:26:18.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:18.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:18.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:18.267 Initialization complete. Launching workers. 
00:26:18.267 00:26:18.267 ==================== 00:26:18.267 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:18.267 TCP transport: 00:26:18.267 polls: 6697 00:26:18.267 idle_polls: 4150 00:26:18.267 sock_completions: 2547 00:26:18.267 nvme_completions: 4069 00:26:18.267 submitted_requests: 6178 00:26:18.267 queued_requests: 1 00:26:18.267 00:26:18.267 ==================== 00:26:18.267 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:18.267 TCP transport: 00:26:18.267 polls: 7636 00:26:18.267 idle_polls: 5036 00:26:18.267 sock_completions: 2600 00:26:18.267 nvme_completions: 5007 00:26:18.267 submitted_requests: 7434 00:26:18.267 queued_requests: 1 00:26:18.267 ======================================================== 00:26:18.267 Latency(us) 00:26:18.267 Device Information : IOPS MiB/s Average min max 00:26:18.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1016.91 254.23 135116.96 86441.30 414729.52 00:26:18.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1251.39 312.85 102938.59 52443.67 301711.23 00:26:18.267 ======================================================== 00:26:18.267 Total : 2268.30 567.07 117364.60 52443.67 414729.52 00:26:18.267 00:26:18.267 11:17:25 -- host/perf.sh@66 -- # sync 00:26:18.267 11:17:25 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.267 11:17:26 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:18.267 11:17:26 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:18.267 11:17:26 -- host/perf.sh@114 -- # nvmftestfini 00:26:18.267 11:17:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:18.267 11:17:26 -- nvmf/common.sh@117 -- # sync 00:26:18.267 11:17:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:18.267 11:17:26 -- nvmf/common.sh@120 -- # set +e 00:26:18.267 11:17:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:18.267 11:17:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:18.267 rmmod nvme_tcp 00:26:18.267 rmmod nvme_fabrics 00:26:18.267 rmmod nvme_keyring 00:26:18.267 11:17:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:18.267 11:17:26 -- nvmf/common.sh@124 -- # set -e 00:26:18.267 11:17:26 -- nvmf/common.sh@125 -- # return 0 00:26:18.267 11:17:26 -- nvmf/common.sh@478 -- # '[' -n 82696 ']' 00:26:18.267 11:17:26 -- nvmf/common.sh@479 -- # killprocess 82696 00:26:18.267 11:17:26 -- common/autotest_common.sh@936 -- # '[' -z 82696 ']' 00:26:18.267 11:17:26 -- common/autotest_common.sh@940 -- # kill -0 82696 00:26:18.267 11:17:26 -- common/autotest_common.sh@941 -- # uname 00:26:18.267 11:17:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:18.267 11:17:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82696 00:26:18.268 killing process with pid 82696 00:26:18.268 11:17:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:18.268 11:17:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:18.268 11:17:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82696' 00:26:18.268 11:17:26 -- common/autotest_common.sh@955 -- # kill 82696 00:26:18.268 11:17:26 -- common/autotest_common.sh@960 -- # wait 82696 00:26:19.643 11:17:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:19.643 11:17:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:19.643 11:17:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:19.643 11:17:27 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.643 11:17:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.643 11:17:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.643 11:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.643 11:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.903 11:17:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:19.903 00:26:19.903 real 0m16.421s 00:26:19.903 user 0m59.767s 00:26:19.903 sys 0m3.782s 00:26:19.903 11:17:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:19.903 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:19.903 ************************************ 00:26:19.903 END TEST nvmf_perf 00:26:19.903 ************************************ 00:26:19.903 11:17:27 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:19.903 11:17:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:19.903 11:17:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.903 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:19.903 ************************************ 00:26:19.903 START TEST nvmf_fio_host 00:26:19.903 ************************************ 00:26:19.903 11:17:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:19.903 * Looking for test storage... 00:26:19.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:19.903 11:17:28 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:19.903 11:17:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.903 11:17:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.903 11:17:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.903 11:17:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.903 11:17:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.903 11:17:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.903 11:17:28 -- paths/export.sh@5 -- # export PATH 00:26:19.903 11:17:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.903 11:17:28 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:19.903 11:17:28 -- nvmf/common.sh@7 -- # uname -s 00:26:19.903 11:17:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.903 11:17:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.903 11:17:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.903 11:17:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.903 11:17:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.903 11:17:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.903 11:17:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.903 11:17:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.903 11:17:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.903 11:17:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.903 11:17:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:26:19.903 11:17:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:26:19.903 11:17:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.903 11:17:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.903 11:17:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:19.903 11:17:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.903 11:17:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:19.903 11:17:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.903 11:17:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.903 11:17:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.903 11:17:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.903 11:17:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.903 11:17:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.903 11:17:28 -- paths/export.sh@5 -- # export PATH 00:26:19.903 11:17:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.903 11:17:28 -- nvmf/common.sh@47 -- # : 0 00:26:19.903 11:17:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:19.903 11:17:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:19.903 11:17:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:19.903 11:17:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.903 11:17:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.903 11:17:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:19.903 11:17:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:19.903 11:17:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:19.903 11:17:28 -- host/fio.sh@12 -- # nvmftestinit 00:26:19.903 11:17:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:19.903 11:17:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.903 11:17:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:19.903 11:17:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:19.903 11:17:28 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:26:19.903 11:17:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.903 11:17:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.903 11:17:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.162 11:17:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:20.162 11:17:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:20.162 11:17:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:20.162 11:17:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:20.162 11:17:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:20.162 11:17:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:20.162 11:17:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.162 11:17:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.162 11:17:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:20.162 11:17:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:20.162 11:17:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:20.162 11:17:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:20.162 11:17:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:20.162 11:17:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.162 11:17:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:20.162 11:17:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:20.162 11:17:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:20.162 11:17:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:20.162 11:17:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:20.162 11:17:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:20.162 Cannot find device "nvmf_tgt_br" 00:26:20.162 11:17:28 -- nvmf/common.sh@155 -- # true 00:26:20.162 11:17:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:20.162 Cannot find device "nvmf_tgt_br2" 00:26:20.162 11:17:28 -- nvmf/common.sh@156 -- # true 00:26:20.162 11:17:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:20.162 11:17:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:20.162 Cannot find device "nvmf_tgt_br" 00:26:20.162 11:17:28 -- nvmf/common.sh@158 -- # true 00:26:20.162 11:17:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:20.162 Cannot find device "nvmf_tgt_br2" 00:26:20.162 11:17:28 -- nvmf/common.sh@159 -- # true 00:26:20.162 11:17:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:20.162 11:17:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:20.162 11:17:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:20.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.162 11:17:28 -- nvmf/common.sh@162 -- # true 00:26:20.162 11:17:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:20.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.162 11:17:28 -- nvmf/common.sh@163 -- # true 00:26:20.162 11:17:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:20.162 11:17:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:20.162 11:17:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:26:20.162 11:17:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:20.162 11:17:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:20.162 11:17:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:20.162 11:17:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:20.162 11:17:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:20.162 11:17:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:20.162 11:17:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:20.162 11:17:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:20.162 11:17:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:20.162 11:17:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:20.162 11:17:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:20.162 11:17:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:20.162 11:17:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:20.162 11:17:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:20.162 11:17:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:20.162 11:17:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:20.162 11:17:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:20.421 11:17:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:20.421 11:17:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:20.421 11:17:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:20.421 11:17:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:20.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:26:20.421 00:26:20.421 --- 10.0.0.2 ping statistics --- 00:26:20.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.421 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:20.421 11:17:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:20.421 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:20.421 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:26:20.421 00:26:20.421 --- 10.0.0.3 ping statistics --- 00:26:20.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.421 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:20.421 11:17:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:20.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:20.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:26:20.421 00:26:20.421 --- 10.0.0.1 ping statistics --- 00:26:20.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.421 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:26:20.421 11:17:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.421 11:17:28 -- nvmf/common.sh@422 -- # return 0 00:26:20.421 11:17:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:20.421 11:17:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.421 11:17:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:20.421 11:17:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:20.421 11:17:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.422 11:17:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:20.422 11:17:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:20.422 11:17:28 -- host/fio.sh@14 -- # [[ y != y ]] 00:26:20.422 11:17:28 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:26:20.422 11:17:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:20.422 11:17:28 -- common/autotest_common.sh@10 -- # set +x 00:26:20.422 11:17:28 -- host/fio.sh@22 -- # nvmfpid=83212 00:26:20.422 11:17:28 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:20.422 11:17:28 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:20.422 11:17:28 -- host/fio.sh@26 -- # waitforlisten 83212 00:26:20.422 11:17:28 -- common/autotest_common.sh@817 -- # '[' -z 83212 ']' 00:26:20.422 11:17:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.422 11:17:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:20.422 11:17:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.422 11:17:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:20.422 11:17:28 -- common/autotest_common.sh@10 -- # set +x 00:26:20.422 [2024-04-18 11:17:28.581046] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:20.422 [2024-04-18 11:17:28.581818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.681 [2024-04-18 11:17:28.763695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.940 [2024-04-18 11:17:29.089641] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.940 [2024-04-18 11:17:29.089697] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.940 [2024-04-18 11:17:29.089734] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.940 [2024-04-18 11:17:29.089748] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.940 [2024-04-18 11:17:29.089762] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:20.940 [2024-04-18 11:17:29.089946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.940 [2024-04-18 11:17:29.090358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.940 [2024-04-18 11:17:29.090378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.940 [2024-04-18 11:17:29.090271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.506 11:17:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:21.506 11:17:29 -- common/autotest_common.sh@850 -- # return 0 00:26:21.506 11:17:29 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:21.506 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.506 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:26:21.506 [2024-04-18 11:17:29.569728] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.506 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.506 11:17:29 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:26:21.506 11:17:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:21.506 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:26:21.506 11:17:29 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:21.506 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.506 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:26:21.506 Malloc1 00:26:21.506 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.506 11:17:29 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:21.506 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.506 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:26:21.506 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.506 11:17:29 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:21.506 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.506 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:26:21.764 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.764 11:17:29 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.764 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.764 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:26:21.764 [2024-04-18 11:17:29.734494] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.764 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.764 11:17:29 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:21.764 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.764 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:26:21.764 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.764 11:17:29 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:26:21.764 11:17:29 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:21.765 11:17:29 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
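The fio_nvme invocation above preloads the SPDK external ioengine (build/fio/spdk_nvme, together with libasan as the trace shows next) and passes the NVMe/TCP target as the filename. A job file consistent with the banner fio prints below is sketched here; it is a hypothetical reconstruction in the spirit of example_config.fio, not the verbatim file:

  [global]
  ioengine=spdk        # provided by the preloaded fio plugin
  thread=1             # the SPDK plugin requires fio's thread mode
  direct=1
  rw=randrw
  bs=4096
  iodepth=128
  time_based=1
  runtime=2

  [test]
  numjobs=1

Run as, for example:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme fio example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096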
00:26:21.765 11:17:29 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:21.765 11:17:29 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:21.765 11:17:29 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:21.765 11:17:29 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:21.765 11:17:29 -- common/autotest_common.sh@1327 -- # shift 00:26:21.765 11:17:29 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:21.765 11:17:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.765 11:17:29 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:21.765 11:17:29 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:21.765 11:17:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:21.765 11:17:29 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:21.765 11:17:29 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:21.765 11:17:29 -- common/autotest_common.sh@1333 -- # break 00:26:21.765 11:17:29 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:21.765 11:17:29 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:21.765 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:21.765 fio-3.35 00:26:21.765 Starting 1 thread 00:26:24.304 00:26:24.304 test: (groupid=0, jobs=1): err= 0: pid=83283: Thu Apr 18 11:17:32 2024 00:26:24.304 read: IOPS=6824, BW=26.7MiB/s (28.0MB/s)(53.6MiB/2009msec) 00:26:24.304 slat (usec): min=2, max=347, avg= 3.28, stdev= 3.82 00:26:24.304 clat (usec): min=3418, max=16991, avg=9776.95, stdev=662.28 00:26:24.304 lat (usec): min=3467, max=16994, avg=9780.24, stdev=661.90 00:26:24.304 clat percentiles (usec): 00:26:24.304 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9241], 00:26:24.304 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:26:24.304 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 00:26:24.304 | 99.00th=[11338], 99.50th=[11600], 99.90th=[13960], 99.95th=[15008], 00:26:24.304 | 99.99th=[16909] 00:26:24.304 bw ( KiB/s): min=26304, max=27928, per=100.00%, avg=27308.00, stdev=728.73, samples=4 00:26:24.304 iops : min= 6576, max= 6982, avg=6827.00, stdev=182.18, samples=4 00:26:24.304 write: IOPS=6830, BW=26.7MiB/s (28.0MB/s)(53.6MiB/2009msec); 0 zone resets 00:26:24.304 slat (usec): min=2, max=252, avg= 3.42, stdev= 2.54 00:26:24.304 clat (usec): min=2711, max=16904, avg=8879.61, stdev=655.27 00:26:24.304 lat (usec): min=2737, max=16907, avg=8883.03, stdev=654.99 00:26:24.304 clat percentiles (usec): 00:26:24.304 | 1.00th=[ 7635], 5.00th=[ 8029], 10.00th=[ 8225], 20.00th=[ 8455], 00:26:24.304 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:26:24.304 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9634], 00:26:24.304 | 99.00th=[10159], 99.50th=[10290], 99.90th=[15664], 99.95th=[15926], 00:26:24.304 | 99.99th=[16909] 00:26:24.304 bw ( KiB/s): min=26952, max=27480, per=99.93%, avg=27304.00, stdev=238.31, samples=4 00:26:24.304 iops : min= 6738, max= 6870, avg=6826.00, stdev=59.58, samples=4 00:26:24.304 lat (msec) : 4=0.08%, 
10=82.35%, 20=17.57% 00:26:24.304 cpu : usr=71.71%, sys=20.52%, ctx=16, majf=0, minf=1536 00:26:24.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:24.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:24.304 issued rwts: total=13710,13723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:24.304 00:26:24.304 Run status group 0 (all jobs): 00:26:24.304 READ: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=53.6MiB (56.2MB), run=2009-2009msec 00:26:24.304 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=53.6MiB (56.2MB), run=2009-2009msec 00:26:24.564 ----------------------------------------------------- 00:26:24.564 Suppressions used: 00:26:24.564 count bytes template 00:26:24.564 1 57 /usr/src/fio/parse.c 00:26:24.564 1 8 libtcmalloc_minimal.so 00:26:24.564 ----------------------------------------------------- 00:26:24.564 00:26:24.564 11:17:32 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:24.564 11:17:32 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:24.564 11:17:32 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:24.564 11:17:32 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:24.564 11:17:32 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:24.564 11:17:32 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:24.564 11:17:32 -- common/autotest_common.sh@1327 -- # shift 00:26:24.564 11:17:32 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:24.564 11:17:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.564 11:17:32 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:24.564 11:17:32 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:24.564 11:17:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:24.564 11:17:32 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:24.564 11:17:32 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:24.564 11:17:32 -- common/autotest_common.sh@1333 -- # break 00:26:24.564 11:17:32 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:24.564 11:17:32 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:24.564 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:24.564 fio-3.35 00:26:24.564 Starting 1 thread 00:26:27.132 00:26:27.132 test: (groupid=0, jobs=1): err= 0: pid=83324: Thu Apr 18 11:17:35 2024 00:26:27.132 read: IOPS=6254, BW=97.7MiB/s (102MB/s)(196MiB/2007msec) 00:26:27.132 slat (usec): min=3, max=149, avg= 4.99, stdev= 2.38 00:26:27.132 clat (usec): min=3367, max=25023, avg=12092.26, stdev=3065.68 00:26:27.132 lat 
(usec): min=3372, max=25029, avg=12097.25, stdev=3065.92 00:26:27.132 clat percentiles (usec): 00:26:27.132 | 1.00th=[ 6259], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[ 9372], 00:26:27.132 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11863], 60.00th=[12780], 00:26:27.132 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15926], 95.00th=[17433], 00:26:27.132 | 99.00th=[21365], 99.50th=[22938], 99.90th=[24249], 99.95th=[24511], 00:26:27.132 | 99.99th=[25035] 00:26:27.132 bw ( KiB/s): min=42080, max=59200, per=49.31%, avg=49344.00, stdev=8300.24, samples=4 00:26:27.132 iops : min= 2630, max= 3700, avg=3084.00, stdev=518.77, samples=4 00:26:27.132 write: IOPS=3602, BW=56.3MiB/s (59.0MB/s)(101MiB/1798msec); 0 zone resets 00:26:27.132 slat (usec): min=37, max=949, avg=41.67, stdev=14.04 00:26:27.132 clat (usec): min=8854, max=29739, avg=15110.45, stdev=3075.38 00:26:27.132 lat (usec): min=8893, max=29777, avg=15152.12, stdev=3077.52 00:26:27.132 clat percentiles (usec): 00:26:27.132 | 1.00th=[10290], 5.00th=[11076], 10.00th=[11731], 20.00th=[12649], 00:26:27.132 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14484], 60.00th=[15139], 00:26:27.132 | 70.00th=[16057], 80.00th=[17433], 90.00th=[19530], 95.00th=[20579], 00:26:27.132 | 99.00th=[24773], 99.50th=[25822], 99.90th=[28181], 99.95th=[28181], 00:26:27.132 | 99.99th=[29754] 00:26:27.132 bw ( KiB/s): min=44704, max=60960, per=89.08%, avg=51344.00, stdev=7794.90, samples=4 00:26:27.132 iops : min= 2794, max= 3810, avg=3209.00, stdev=487.18, samples=4 00:26:27.132 lat (msec) : 4=0.06%, 10=17.30%, 20=78.99%, 50=3.66% 00:26:27.132 cpu : usr=74.69%, sys=16.89%, ctx=39, majf=0, minf=2078 00:26:27.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:27.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:27.132 issued rwts: total=12553,6477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:27.132 00:26:27.132 Run status group 0 (all jobs): 00:26:27.132 READ: bw=97.7MiB/s (102MB/s), 97.7MiB/s-97.7MiB/s (102MB/s-102MB/s), io=196MiB (206MB), run=2007-2007msec 00:26:27.132 WRITE: bw=56.3MiB/s (59.0MB/s), 56.3MiB/s-56.3MiB/s (59.0MB/s-59.0MB/s), io=101MiB (106MB), run=1798-1798msec 00:26:27.132 ----------------------------------------------------- 00:26:27.132 Suppressions used: 00:26:27.132 count bytes template 00:26:27.132 1 57 /usr/src/fio/parse.c 00:26:27.132 128 12288 /usr/src/fio/iolog.c 00:26:27.132 1 8 libtcmalloc_minimal.so 00:26:27.132 ----------------------------------------------------- 00:26:27.132 00:26:27.390 11:17:35 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:27.390 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.390 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:27.390 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.390 11:17:35 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:26:27.390 11:17:35 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:26:27.390 11:17:35 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:26:27.390 11:17:35 -- host/fio.sh@84 -- # nvmftestfini 00:26:27.390 11:17:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:27.390 11:17:35 -- nvmf/common.sh@117 -- # sync 00:26:27.390 11:17:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:27.390 11:17:35 -- nvmf/common.sh@120 -- # set +e 00:26:27.390 11:17:35 -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:26:27.390 11:17:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:27.390 rmmod nvme_tcp 00:26:27.390 rmmod nvme_fabrics 00:26:27.390 rmmod nvme_keyring 00:26:27.390 11:17:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:27.390 11:17:35 -- nvmf/common.sh@124 -- # set -e 00:26:27.390 11:17:35 -- nvmf/common.sh@125 -- # return 0 00:26:27.390 11:17:35 -- nvmf/common.sh@478 -- # '[' -n 83212 ']' 00:26:27.390 11:17:35 -- nvmf/common.sh@479 -- # killprocess 83212 00:26:27.390 11:17:35 -- common/autotest_common.sh@936 -- # '[' -z 83212 ']' 00:26:27.390 11:17:35 -- common/autotest_common.sh@940 -- # kill -0 83212 00:26:27.390 11:17:35 -- common/autotest_common.sh@941 -- # uname 00:26:27.390 11:17:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:27.390 11:17:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83212 00:26:27.390 11:17:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:27.390 killing process with pid 83212 00:26:27.390 11:17:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:27.390 11:17:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83212' 00:26:27.390 11:17:35 -- common/autotest_common.sh@955 -- # kill 83212 00:26:27.390 11:17:35 -- common/autotest_common.sh@960 -- # wait 83212 00:26:28.805 11:17:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:28.805 11:17:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:28.805 11:17:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:28.805 11:17:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:28.805 11:17:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:28.805 11:17:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.805 11:17:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.805 11:17:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.805 11:17:36 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:28.805 00:26:28.805 real 0m8.862s 00:26:28.805 user 0m33.332s 00:26:28.805 sys 0m2.172s 00:26:28.805 11:17:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:28.805 11:17:36 -- common/autotest_common.sh@10 -- # set +x 00:26:28.805 ************************************ 00:26:28.805 END TEST nvmf_fio_host 00:26:28.805 ************************************ 00:26:28.805 11:17:36 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:28.805 11:17:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:28.805 11:17:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:28.805 11:17:36 -- common/autotest_common.sh@10 -- # set +x 00:26:28.805 ************************************ 00:26:28.805 START TEST nvmf_failover 00:26:28.805 ************************************ 00:26:28.805 11:17:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:29.065 * Looking for test storage... 
00:26:29.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:29.065 11:17:37 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:29.065 11:17:37 -- nvmf/common.sh@7 -- # uname -s 00:26:29.065 11:17:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.065 11:17:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.065 11:17:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.065 11:17:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.065 11:17:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.065 11:17:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.065 11:17:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.065 11:17:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.065 11:17:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.065 11:17:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.065 11:17:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:26:29.065 11:17:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:26:29.065 11:17:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.065 11:17:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.065 11:17:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:29.065 11:17:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.065 11:17:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:29.065 11:17:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.065 11:17:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.065 11:17:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.065 11:17:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.065 11:17:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.065 11:17:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.065 11:17:37 -- paths/export.sh@5 -- # export PATH 00:26:29.065 11:17:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.065 11:17:37 -- nvmf/common.sh@47 -- # : 0 00:26:29.065 11:17:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:29.065 11:17:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:29.065 11:17:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.065 11:17:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.065 11:17:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.065 11:17:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:29.065 11:17:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:29.065 11:17:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:29.065 11:17:37 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:29.065 11:17:37 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:29.065 11:17:37 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.065 11:17:37 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:29.065 11:17:37 -- host/failover.sh@18 -- # nvmftestinit 00:26:29.065 11:17:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:29.065 11:17:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.065 11:17:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:29.065 11:17:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:29.065 11:17:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:29.065 11:17:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.065 11:17:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:29.065 11:17:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.065 11:17:37 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:29.065 11:17:37 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:29.065 11:17:37 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:29.065 11:17:37 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:29.066 11:17:37 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:29.066 11:17:37 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:29.066 11:17:37 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.066 11:17:37 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.066 11:17:37 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:29.066 11:17:37 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:29.066 11:17:37 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:29.066 11:17:37 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:29.066 11:17:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:29.066 11:17:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.066 11:17:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:29.066 11:17:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:29.066 11:17:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:29.066 11:17:37 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:29.066 11:17:37 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:29.066 11:17:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:29.066 Cannot find device "nvmf_tgt_br" 00:26:29.066 11:17:37 -- nvmf/common.sh@155 -- # true 00:26:29.066 11:17:37 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:29.066 Cannot find device "nvmf_tgt_br2" 00:26:29.066 11:17:37 -- nvmf/common.sh@156 -- # true 00:26:29.066 11:17:37 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:29.066 11:17:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:29.066 Cannot find device "nvmf_tgt_br" 00:26:29.066 11:17:37 -- nvmf/common.sh@158 -- # true 00:26:29.066 11:17:37 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:29.066 Cannot find device "nvmf_tgt_br2" 00:26:29.066 11:17:37 -- nvmf/common.sh@159 -- # true 00:26:29.066 11:17:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:29.066 11:17:37 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:29.066 11:17:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:29.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:29.066 11:17:37 -- nvmf/common.sh@162 -- # true 00:26:29.066 11:17:37 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:29.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:29.066 11:17:37 -- nvmf/common.sh@163 -- # true 00:26:29.066 11:17:37 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:29.066 11:17:37 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:29.066 11:17:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:29.066 11:17:37 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:29.325 11:17:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:29.325 11:17:37 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:29.325 11:17:37 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:29.325 11:17:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:29.325 11:17:37 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:29.325 11:17:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:29.325 11:17:37 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:29.325 11:17:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:29.325 11:17:37 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:29.325 11:17:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:26:29.325 11:17:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:29.325 11:17:37 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:29.325 11:17:37 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:29.325 11:17:37 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:29.325 11:17:37 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:29.325 11:17:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:29.325 11:17:37 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:29.325 11:17:37 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:29.325 11:17:37 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:29.325 11:17:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:29.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:26:29.325 00:26:29.325 --- 10.0.0.2 ping statistics --- 00:26:29.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.325 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:26:29.325 11:17:37 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:29.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:29.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:26:29.325 00:26:29.325 --- 10.0.0.3 ping statistics --- 00:26:29.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.325 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:26:29.325 11:17:37 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:29.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:29.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:26:29.325 00:26:29.325 --- 10.0.0.1 ping statistics --- 00:26:29.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.325 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:29.325 11:17:37 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.325 11:17:37 -- nvmf/common.sh@422 -- # return 0 00:26:29.325 11:17:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:29.325 11:17:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.325 11:17:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:29.325 11:17:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:29.325 11:17:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.325 11:17:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:29.325 11:17:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:29.325 11:17:37 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:29.325 11:17:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:29.325 11:17:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:29.325 11:17:37 -- common/autotest_common.sh@10 -- # set +x 00:26:29.325 11:17:37 -- nvmf/common.sh@470 -- # nvmfpid=83552 00:26:29.325 11:17:37 -- nvmf/common.sh@471 -- # waitforlisten 83552 00:26:29.325 11:17:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:29.325 11:17:37 -- common/autotest_common.sh@817 -- # '[' -z 83552 ']' 00:26:29.325 11:17:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.325 11:17:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:29.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.325 11:17:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.325 11:17:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:29.325 11:17:37 -- common/autotest_common.sh@10 -- # set +x 00:26:29.584 [2024-04-18 11:17:37.648170] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:29.584 [2024-04-18 11:17:37.648346] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.843 [2024-04-18 11:17:37.824404] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:30.102 [2024-04-18 11:17:38.086974] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.102 [2024-04-18 11:17:38.087063] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.102 [2024-04-18 11:17:38.087083] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.102 [2024-04-18 11:17:38.087152] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.102 [2024-04-18 11:17:38.087169] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:30.102 [2024-04-18 11:17:38.087396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.102 [2024-04-18 11:17:38.087505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.102 [2024-04-18 11:17:38.087529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:30.669 11:17:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:30.669 11:17:38 -- common/autotest_common.sh@850 -- # return 0 00:26:30.669 11:17:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:30.669 11:17:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:30.669 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:30.669 11:17:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.669 11:17:38 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:30.964 [2024-04-18 11:17:38.905523] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.964 11:17:38 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:31.222 Malloc0 00:26:31.222 11:17:39 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:31.479 11:17:39 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:31.737 11:17:39 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.996 [2024-04-18 11:17:40.137165] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.996 11:17:40 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:32.255 [2024-04-18 11:17:40.421376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:32.255 11:17:40 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:32.513 [2024-04-18 11:17:40.657639] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:32.513 11:17:40 -- host/failover.sh@31 -- # bdevperf_pid=83669 00:26:32.513 11:17:40 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:32.513 11:17:40 -- host/failover.sh@34 -- # waitforlisten 83669 /var/tmp/bdevperf.sock 00:26:32.513 11:17:40 -- common/autotest_common.sh@817 -- # '[' -z 83669 ']' 00:26:32.513 11:17:40 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:32.513 11:17:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:32.513 11:17:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:32.513 11:17:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:32.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
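At this point the failover target is fully configured: one TCP transport, a 64 MB Malloc0 bdev exported as a namespace under nqn.2016-06.io.spdk:cnode1, and listeners on ports 4420, 4421 and 4422 of 10.0.0.2, with bdevperf started in wait mode (-z) on /var/tmp/bdevperf.sock. Condensed from the commands traced above (full paths and the ip-netns wrapping from the trace omitted), the equivalent rpc.py sequence is roughly:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The initiator side then attaches through bdevperf's RPC socket with bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1, as traced below.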
00:26:32.513 11:17:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:32.513 11:17:40 -- common/autotest_common.sh@10 -- # set +x 00:26:33.448 11:17:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:33.448 11:17:41 -- common/autotest_common.sh@850 -- # return 0 00:26:33.448 11:17:41 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:34.014 NVMe0n1 00:26:34.014 11:17:42 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:34.272 00:26:34.272 11:17:42 -- host/failover.sh@39 -- # run_test_pid=83717 00:26:34.272 11:17:42 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:34.272 11:17:42 -- host/failover.sh@41 -- # sleep 1 00:26:35.205 11:17:43 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.463 [2024-04-18 11:17:43.623037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623116] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623255] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:26:35.463 [2024-04-18 11:17:43.623267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is 
same with the state(5) to be set 00:26:35.463 11:17:43 -- host/failover.sh@45 -- # sleep 3 00:26:38.744 11:17:46 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:39.013 00:26:39.013 11:17:47 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:39.282 [2024-04-18 11:17:47.310813] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310906] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310917] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310987] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.310999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311022] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311034] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311045] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311057] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 
00:26:39.282 [2024-04-18 11:17:47.311081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311211] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311222] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311245] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 [2024-04-18 11:17:47.311256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:39.282 11:17:47 -- host/failover.sh@50 -- # sleep 3 00:26:42.565 11:17:50 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:42.565 [2024-04-18 11:17:50.568016] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.565 11:17:50 -- host/failover.sh@55 -- # sleep 1 00:26:43.502 11:17:51 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:43.761 [2024-04-18 11:17:51.821526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821582] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821612] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821659] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821710] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821722] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821734] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821746] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821770] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821793] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821804] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821816] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821828] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821851] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821875] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821899] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821911] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 [2024-04-18 11:17:51.821946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:43.761 11:17:51 -- host/failover.sh@59 -- # wait 83717 00:26:50.337 0 00:26:50.337 11:17:57 -- host/failover.sh@61 -- # killprocess 83669 00:26:50.337 11:17:57 -- common/autotest_common.sh@936 -- # '[' -z 83669 ']' 00:26:50.337 11:17:57 -- common/autotest_common.sh@940 -- # kill -0 83669 00:26:50.337 11:17:57 -- common/autotest_common.sh@941 -- # uname 00:26:50.337 11:17:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:50.337 11:17:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83669 00:26:50.337 11:17:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:50.337 11:17:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:50.337 killing process with pid 83669 00:26:50.337 11:17:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83669' 00:26:50.337 11:17:57 -- common/autotest_common.sh@955 -- # kill 83669 00:26:50.337 11:17:57 -- common/autotest_common.sh@960 -- # wait 83669 00:26:50.604 11:17:58 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:50.604 [2024-04-18 11:17:40.764603] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:50.604 [2024-04-18 11:17:40.764779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83669 ] 00:26:50.604 [2024-04-18 11:17:40.930862] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.604 [2024-04-18 11:17:41.219254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.604 Running I/O for 15 seconds... 
00:26:50.604 [2024-04-18 11:17:43.624364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.604 [2024-04-18 11:17:43.624498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.604 [2024-04-18 11:17:43.624545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.604 [2024-04-18 11:17:43.624568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.604 [2024-04-18 11:17:43.624591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.604 [2024-04-18 11:17:43.624612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.604 [2024-04-18 11:17:43.624644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.604 [2024-04-18 11:17:43.624664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.604 [2024-04-18 11:17:43.624685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.605 [2024-04-18 11:17:43.624704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.624725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.605 [2024-04-18 11:17:43.624744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.624766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.605 [2024-04-18 11:17:43.624785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.624807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.624827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.624848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.624869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.624890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.624909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.624931] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.624950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625408] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60616 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.625961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.625982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.626001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.626022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.626042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.626062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.626089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.626126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.626149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.626170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.626190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.626210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.626230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.626251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 
[2024-04-18 11:17:43.626270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.605 [2024-04-18 11:17:43.626292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.605 [2024-04-18 11:17:43.626310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.626980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.626999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.627039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.627079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.627135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.627184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.606 [2024-04-18 11:17:43.627226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.606 [2024-04-18 11:17:43.627741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.606 [2024-04-18 11:17:43.627760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.627781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.627800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.627821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.627841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.627862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.627881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.627901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.607 [2024-04-18 11:17:43.627920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.627941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.627968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 
[2024-04-18 11:17:43.627990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628437] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.628967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.628986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.629027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.629067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.629136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.629179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.629218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.629258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.629297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.629353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.629393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.607 [2024-04-18 11:17:43.629435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.607 [2024-04-18 11:17:43.629455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60376 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:43.629928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.629947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007240 is same with the state(5) to be set 00:26:50.608 [2024-04-18 11:17:43.629972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:50.608 [2024-04-18 11:17:43.629989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:50.608 [2024-04-18 11:17:43.630020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60424 len:8 PRP1 0x0 PRP2 0x0 00:26:50.608 [2024-04-18 11:17:43.630040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:43.630323] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007240 was disconnected and freed. reset controller. 
00:26:50.608 [2024-04-18 11:17:43.630352] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:26:50.608 [2024-04-18 11:17:43.630426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:50.608 [2024-04-18 11:17:43.630454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:50.608 [2024-04-18 11:17:43.630477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:50.608 [2024-04-18 11:17:43.630495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:50.608 [2024-04-18 11:17:43.630513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:50.608 [2024-04-18 11:17:43.630531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:50.608 [2024-04-18 11:17:43.630550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:50.608 [2024-04-18 11:17:43.630568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:50.608 [2024-04-18 11:17:43.630585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:50.608 [2024-04-18 11:17:43.630687] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 
00:26:50.608 [2024-04-18 11:17:43.634753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:26:50.608 [2024-04-18 11:17:43.674289] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:50.608 [2024-04-18 11:17:47.311950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.608 [2024-04-18 11:17:47.312680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:47.312722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:47.312762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.608 [2024-04-18 11:17:47.312803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.608 [2024-04-18 11:17:47.312824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.312843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.312864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.312883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.312903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:48 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.312922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.312943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.312962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.312982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.313000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.313040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.313079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.313132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.313173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.313222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.313262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.609 [2024-04-18 11:17:47.313301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313776] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.609 [2024-04-18 11:17:47.313854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.609 [2024-04-18 11:17:47.313874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.313892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.313913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.313932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.313957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.313975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.313996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:50.610 [2024-04-18 11:17:47.314616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.314980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.314999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.315020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.315038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.315059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.315077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.315098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.315134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.315179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.315201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.315222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.315240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.315262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.315281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.315301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.315331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.315353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.315372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.315393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.315413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.610 [2024-04-18 11:17:47.315433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.610 [2024-04-18 11:17:47.315452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:101 nsid:1 lba:6416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.611 [2024-04-18 11:17:47.315492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.611 [2024-04-18 11:17:47.315531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.611 [2024-04-18 11:17:47.315570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.611 [2024-04-18 11:17:47.315611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.611 [2024-04-18 11:17:47.315660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.611 [2024-04-18 11:17:47.315699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.315739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.315779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.315819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.315870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.315912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.315952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.315973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.315991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 
11:17:47.316356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.611 [2024-04-18 11:17:47.316447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.316974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.316995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.317013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.611 [2024-04-18 11:17:47.317034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.611 [2024-04-18 11:17:47.317053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:47.317073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:47.317091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:47.317123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:47.317144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:47.317165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:47.317183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:47.317204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:47.317222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:47.317243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:47.317261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:47.317282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:47.317300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:47.317321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:47.317340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:47.317361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:47.317381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:47.317424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:50.612 [2024-04-18 11:17:47.317444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:50.612 [2024-04-18 11:17:47.317461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5880 len:8 PRP1 0x0 PRP2 0x0 00:26:50.612 [2024-04-18 11:17:47.317490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:47.317755] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000008040 was disconnected and freed. reset controller. 
00:26:50.612 [2024-04-18 11:17:47.317782] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:50.612 [2024-04-18 11:17:47.317850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:50.612 [2024-04-18 11:17:47.317878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:50.612 [2024-04-18 11:17:47.317899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:50.612 [2024-04-18 11:17:47.317917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:50.612 [2024-04-18 11:17:47.317935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:50.612 [2024-04-18 11:17:47.317953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:50.612 [2024-04-18 11:17:47.317972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:50.612 [2024-04-18 11:17:47.317989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:50.612 [2024-04-18 11:17:47.318007] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:50.612 [2024-04-18 11:17:47.318079] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor
00:26:50.612 [2024-04-18 11:17:47.322032] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:50.612 [2024-04-18 11:17:47.372966] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:50.612 [2024-04-18 11:17:51.820080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.612 [2024-04-18 11:17:51.820187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.820219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.612 [2024-04-18 11:17:51.820239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.820259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.612 [2024-04-18 11:17:51.820277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.820296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.612 [2024-04-18 11:17:51.820314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.820345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004a40 is same with the state(5) to be set 00:26:50.612 [2024-04-18 11:17:51.822061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822368] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.612 [2024-04-18 11:17:51.822646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.612 [2024-04-18 11:17:51.822667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.822686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.822736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.822757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.822778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.822797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.822820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.822840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.822861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.822881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.822901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.822920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.822942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.822982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.613 [2024-04-18 11:17:51.823557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.823598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.823638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.823678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.823717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.823758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.823797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.823846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.823885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.823925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.823965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.823985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.824004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.824026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.824045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.824065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.824085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:50.613 [2024-04-18 11:17:51.824118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.824140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.824161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.824180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.824207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.824227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.824248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.824267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.613 [2024-04-18 11:17:51.824288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.613 [2024-04-18 11:17:51.824307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 
11:17:51.824564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.614 [2024-04-18 11:17:51.824916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.614 [2024-04-18 11:17:51.824957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.824978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.614 [2024-04-18 11:17:51.824998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.825020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.614 [2024-04-18 11:17:51.825039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.825059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.614 [2024-04-18 11:17:51.825078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.825099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.614 [2024-04-18 11:17:51.825143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.825165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.614 [2024-04-18 11:17:51.825184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.825204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.614 [2024-04-18 11:17:51.825223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.825244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.614 [2024-04-18 11:17:51.825263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.825284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.614 [2024-04-18 11:17:51.825303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.825323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.614 [2024-04-18 11:17:51.825342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.614 [2024-04-18 11:17:51.825363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:105 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126872 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.825951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.825979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.826021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.826060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.826100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.826157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.826197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.615 [2024-04-18 11:17:51.826238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:50.615 [2024-04-18 11:17:51.826278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.615 [2024-04-18 11:17:51.826318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.615 [2024-04-18 11:17:51.826357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.615 [2024-04-18 11:17:51.826398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.615 [2024-04-18 11:17:51.826437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.615 [2024-04-18 11:17:51.826477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.615 [2024-04-18 11:17:51.826507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.826527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.826567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.826606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.826646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 
11:17:51.826685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.826725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.826765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.826804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.826843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.826884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.616 [2024-04-18 11:17:51.826924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.616 [2024-04-18 11:17:51.826964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.826985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.616 [2024-04-18 11:17:51.827012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.616 [2024-04-18 11:17:51.827054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.616 [2024-04-18 11:17:51.827094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.616 [2024-04-18 11:17:51.827153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.616 [2024-04-18 11:17:51.827193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:50.616 [2024-04-18 11:17:51.827233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.827273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.827313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.827352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.827392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.827442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.827481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.616 [2024-04-18 11:17:51.827521] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009240 is same with the state(5) to be set 00:26:50.616 [2024-04-18 11:17:51.827576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:50.616 [2024-04-18 11:17:51.827592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:50.616 [2024-04-18 11:17:51.827610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126560 len:8 PRP1 0x0 PRP2 0x0 00:26:50.616 [2024-04-18 11:17:51.827629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.616 [2024-04-18 11:17:51.827913] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009240 was disconnected and freed. reset controller. 00:26:50.616 [2024-04-18 11:17:51.827940] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:50.617 [2024-04-18 11:17:51.827960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:50.617 [2024-04-18 11:17:51.832275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:50.617 [2024-04-18 11:17:51.832351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:26:50.617 [2024-04-18 11:17:51.865750] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:50.617 00:26:50.617 Latency(us) 00:26:50.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.617 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:50.617 Verification LBA range: start 0x0 length 0x4000 00:26:50.617 NVMe0n1 : 15.02 6705.37 26.19 241.15 0.00 18392.05 763.35 26929.34 00:26:50.617 =================================================================================================================== 00:26:50.617 Total : 6705.37 26.19 241.15 0.00 18392.05 763.35 26929.34 00:26:50.617 Received shutdown signal, test time was about 15.000000 seconds 00:26:50.617 00:26:50.617 Latency(us) 00:26:50.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.617 =================================================================================================================== 00:26:50.617 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:50.617 11:17:58 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:50.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
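The first bdevperf run above ends after three controller resets across the 4420/4421/4422 paths, and the trace that follows asserts exactly that by counting 'Resetting controller successful' lines. A minimal sketch of that check, assuming the run's output was redirected to try.txt (the file this job later cats and removes):

  # One successful controller reset is expected per forced path switch.
  count=$(grep -c 'Resetting controller successful' \
          /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful resets, got $count" >&2
      exit 1
  fi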
00:26:50.617 11:17:58 -- host/failover.sh@65 -- # count=3 00:26:50.617 11:17:58 -- host/failover.sh@67 -- # (( count != 3 )) 00:26:50.617 11:17:58 -- host/failover.sh@73 -- # bdevperf_pid=83934 00:26:50.617 11:17:58 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:50.617 11:17:58 -- host/failover.sh@75 -- # waitforlisten 83934 /var/tmp/bdevperf.sock 00:26:50.617 11:17:58 -- common/autotest_common.sh@817 -- # '[' -z 83934 ']' 00:26:50.617 11:17:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:50.617 11:17:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:50.617 11:17:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:50.617 11:17:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:50.617 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:26:51.991 11:17:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:51.991 11:17:59 -- common/autotest_common.sh@850 -- # return 0 00:26:51.991 11:17:59 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:51.991 [2024-04-18 11:18:00.080579] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:51.991 11:18:00 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:52.249 [2024-04-18 11:18:00.316780] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:52.249 11:18:00 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:52.507 NVMe0n1 00:26:52.507 11:18:00 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:52.765 00:26:53.023 11:18:01 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:53.281 00:26:53.281 11:18:01 -- host/failover.sh@82 -- # grep -q NVMe0 00:26:53.281 11:18:01 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:53.538 11:18:01 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:53.795 11:18:01 -- host/failover.sh@87 -- # sleep 3 00:26:57.070 11:18:04 -- host/failover.sh@88 -- # grep -q NVMe0 00:26:57.070 11:18:04 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:57.071 11:18:05 -- host/failover.sh@90 -- # run_test_pid=84072 00:26:57.071 11:18:05 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:57.071 11:18:05 -- host/failover.sh@92 -- # wait 84072 00:26:58.443 0 00:26:58.443 11:18:06 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:58.443 [2024-04-18 11:17:58.817637] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:58.443 [2024-04-18 11:17:58.817829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83934 ] 00:26:58.443 [2024-04-18 11:17:58.989437] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.443 [2024-04-18 11:17:59.233137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.443 [2024-04-18 11:18:01.871329] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:58.443 [2024-04-18 11:18:01.871489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.443 [2024-04-18 11:18:01.871526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.443 [2024-04-18 11:18:01.871573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.443 [2024-04-18 11:18:01.871594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.443 [2024-04-18 11:18:01.871615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.443 [2024-04-18 11:18:01.871634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.443 [2024-04-18 11:18:01.871655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.443 [2024-04-18 11:18:01.871674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.443 [2024-04-18 11:18:01.871699] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.443 [2024-04-18 11:18:01.871784] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.443 [2024-04-18 11:18:01.871836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:26:58.443 [2024-04-18 11:18:01.876373] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:58.443 Running I/O for 1 seconds... 
00:26:58.443 00:26:58.443 Latency(us) 00:26:58.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.443 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:58.443 Verification LBA range: start 0x0 length 0x4000 00:26:58.443 NVMe0n1 : 1.01 6969.09 27.22 0.00 0.00 18280.93 2934.23 18588.39 00:26:58.443 =================================================================================================================== 00:26:58.443 Total : 6969.09 27.22 0.00 0.00 18280.93 2934.23 18588.39 00:26:58.443 11:18:06 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:58.443 11:18:06 -- host/failover.sh@95 -- # grep -q NVMe0 00:26:58.443 11:18:06 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:58.701 11:18:06 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:58.701 11:18:06 -- host/failover.sh@99 -- # grep -q NVMe0 00:26:59.266 11:18:07 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:59.525 11:18:07 -- host/failover.sh@101 -- # sleep 3 00:27:02.805 11:18:10 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:02.805 11:18:10 -- host/failover.sh@103 -- # grep -q NVMe0 00:27:02.805 11:18:10 -- host/failover.sh@108 -- # killprocess 83934 00:27:02.805 11:18:10 -- common/autotest_common.sh@936 -- # '[' -z 83934 ']' 00:27:02.805 11:18:10 -- common/autotest_common.sh@940 -- # kill -0 83934 00:27:02.805 11:18:10 -- common/autotest_common.sh@941 -- # uname 00:27:02.805 11:18:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:02.805 11:18:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83934 00:27:02.805 killing process with pid 83934 00:27:02.805 11:18:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:02.805 11:18:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:02.805 11:18:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83934' 00:27:02.805 11:18:10 -- common/autotest_common.sh@955 -- # kill 83934 00:27:02.805 11:18:10 -- common/autotest_common.sh@960 -- # wait 83934 00:27:04.177 11:18:11 -- host/failover.sh@110 -- # sync 00:27:04.177 11:18:12 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.177 11:18:12 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:04.177 11:18:12 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:04.177 11:18:12 -- host/failover.sh@116 -- # nvmftestfini 00:27:04.177 11:18:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:04.177 11:18:12 -- nvmf/common.sh@117 -- # sync 00:27:04.177 11:18:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:04.177 11:18:12 -- nvmf/common.sh@120 -- # set +e 00:27:04.177 11:18:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.177 11:18:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:04.177 rmmod nvme_tcp 00:27:04.177 rmmod nvme_fabrics 00:27:04.177 rmmod nvme_keyring 00:27:04.177 11:18:12 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:27:04.177 11:18:12 -- nvmf/common.sh@124 -- # set -e 00:27:04.177 11:18:12 -- nvmf/common.sh@125 -- # return 0 00:27:04.177 11:18:12 -- nvmf/common.sh@478 -- # '[' -n 83552 ']' 00:27:04.177 11:18:12 -- nvmf/common.sh@479 -- # killprocess 83552 00:27:04.177 11:18:12 -- common/autotest_common.sh@936 -- # '[' -z 83552 ']' 00:27:04.177 11:18:12 -- common/autotest_common.sh@940 -- # kill -0 83552 00:27:04.177 11:18:12 -- common/autotest_common.sh@941 -- # uname 00:27:04.177 11:18:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:04.177 11:18:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83552 00:27:04.177 killing process with pid 83552 00:27:04.177 11:18:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:04.177 11:18:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:04.177 11:18:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83552' 00:27:04.177 11:18:12 -- common/autotest_common.sh@955 -- # kill 83552 00:27:04.177 11:18:12 -- common/autotest_common.sh@960 -- # wait 83552 00:27:05.556 11:18:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:05.556 11:18:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:05.556 11:18:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:05.556 11:18:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.556 11:18:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.556 11:18:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.556 11:18:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.556 11:18:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.814 11:18:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:05.814 ************************************ 00:27:05.814 END TEST nvmf_failover 00:27:05.814 ************************************ 00:27:05.814 00:27:05.814 real 0m36.793s 00:27:05.814 user 2m21.168s 00:27:05.814 sys 0m4.950s 00:27:05.814 11:18:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:05.814 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:27:05.814 11:18:13 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:05.814 11:18:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:05.814 11:18:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:05.814 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:27:05.814 ************************************ 00:27:05.814 START TEST nvmf_discovery 00:27:05.814 ************************************ 00:27:05.814 11:18:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:05.814 * Looking for test storage... 
00:27:05.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:05.814 11:18:13 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:05.814 11:18:13 -- nvmf/common.sh@7 -- # uname -s 00:27:05.814 11:18:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.814 11:18:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.814 11:18:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.814 11:18:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.814 11:18:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.814 11:18:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.814 11:18:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.814 11:18:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.814 11:18:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.814 11:18:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.814 11:18:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:27:05.814 11:18:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:27:05.814 11:18:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.814 11:18:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.814 11:18:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:05.814 11:18:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.814 11:18:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:05.814 11:18:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.814 11:18:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.814 11:18:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.814 11:18:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.814 11:18:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.814 11:18:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.814 11:18:14 -- paths/export.sh@5 -- # export PATH 00:27:05.814 11:18:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.814 11:18:14 -- nvmf/common.sh@47 -- # : 0 00:27:05.814 11:18:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:05.814 11:18:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:05.814 11:18:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.814 11:18:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.815 11:18:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.815 11:18:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:05.815 11:18:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:05.815 11:18:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:05.815 11:18:14 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:05.815 11:18:14 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:05.815 11:18:14 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:05.815 11:18:14 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:05.815 11:18:14 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:05.815 11:18:14 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:05.815 11:18:14 -- host/discovery.sh@25 -- # nvmftestinit 00:27:05.815 11:18:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:05.815 11:18:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.815 11:18:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:05.815 11:18:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:05.815 11:18:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:05.815 11:18:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.815 11:18:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.815 11:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.815 11:18:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:05.815 11:18:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:05.815 11:18:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:05.815 11:18:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:05.815 11:18:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:05.815 11:18:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:05.815 11:18:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.815 11:18:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.815 11:18:14 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:05.815 11:18:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:05.815 11:18:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:05.815 11:18:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:05.815 11:18:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:05.815 11:18:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.815 11:18:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:05.815 11:18:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:05.815 11:18:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:05.815 11:18:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:05.815 11:18:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:06.114 11:18:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:06.114 Cannot find device "nvmf_tgt_br" 00:27:06.114 11:18:14 -- nvmf/common.sh@155 -- # true 00:27:06.114 11:18:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:06.114 Cannot find device "nvmf_tgt_br2" 00:27:06.114 11:18:14 -- nvmf/common.sh@156 -- # true 00:27:06.114 11:18:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:06.114 11:18:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:06.114 Cannot find device "nvmf_tgt_br" 00:27:06.114 11:18:14 -- nvmf/common.sh@158 -- # true 00:27:06.114 11:18:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:06.114 Cannot find device "nvmf_tgt_br2" 00:27:06.114 11:18:14 -- nvmf/common.sh@159 -- # true 00:27:06.114 11:18:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:06.114 11:18:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:06.114 11:18:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:06.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.114 11:18:14 -- nvmf/common.sh@162 -- # true 00:27:06.114 11:18:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:06.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.114 11:18:14 -- nvmf/common.sh@163 -- # true 00:27:06.114 11:18:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:06.114 11:18:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:06.114 11:18:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:06.114 11:18:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:06.114 11:18:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:06.114 11:18:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:06.114 11:18:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:06.114 11:18:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:06.114 11:18:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:06.114 11:18:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:06.114 11:18:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:06.114 11:18:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:06.114 11:18:14 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:06.114 11:18:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:06.114 11:18:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:06.114 11:18:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:06.114 11:18:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:06.114 11:18:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:06.114 11:18:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:06.114 11:18:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:06.393 11:18:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:06.393 11:18:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:06.393 11:18:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:06.393 11:18:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:06.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:27:06.393 00:27:06.393 --- 10.0.0.2 ping statistics --- 00:27:06.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.393 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:27:06.393 11:18:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:06.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:06.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:27:06.393 00:27:06.393 --- 10.0.0.3 ping statistics --- 00:27:06.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.393 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:27:06.393 11:18:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:06.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:27:06.394 00:27:06.394 --- 10.0.0.1 ping statistics --- 00:27:06.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.394 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:27:06.394 11:18:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.394 11:18:14 -- nvmf/common.sh@422 -- # return 0 00:27:06.394 11:18:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:06.394 11:18:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.394 11:18:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:06.394 11:18:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:06.394 11:18:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.394 11:18:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:06.394 11:18:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:06.394 11:18:14 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:06.394 11:18:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:06.394 11:18:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:06.394 11:18:14 -- common/autotest_common.sh@10 -- # set +x 00:27:06.394 11:18:14 -- nvmf/common.sh@470 -- # nvmfpid=84406 00:27:06.394 11:18:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:06.394 11:18:14 -- nvmf/common.sh@471 -- # waitforlisten 84406 00:27:06.394 11:18:14 -- common/autotest_common.sh@817 -- # '[' -z 84406 ']' 00:27:06.394 11:18:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.394 11:18:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:06.394 11:18:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.394 11:18:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:06.394 11:18:14 -- common/autotest_common.sh@10 -- # set +x 00:27:06.394 [2024-04-18 11:18:14.504669] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:06.394 [2024-04-18 11:18:14.504833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.652 [2024-04-18 11:18:14.681610] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.910 [2024-04-18 11:18:14.968510] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.910 [2024-04-18 11:18:14.968580] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.910 [2024-04-18 11:18:14.968604] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.910 [2024-04-18 11:18:14.968640] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.910 [2024-04-18 11:18:14.968659] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
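Before the discovery test can start its target, the nvmf_veth_init steps traced above rebuild the test network: one veth pair for the initiator stays on the host, two target-side veth pairs are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are bridged so 10.0.0.1 can reach the 10.0.0.2/10.0.0.3 listeners. A condensed sketch of that bring-up, using the interface names and addresses from this trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 # second target IP
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # host -> namespace reachability check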
00:27:06.910 [2024-04-18 11:18:14.968703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.476 11:18:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:07.476 11:18:15 -- common/autotest_common.sh@850 -- # return 0 00:27:07.476 11:18:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:07.476 11:18:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:07.476 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:27:07.476 11:18:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.476 11:18:15 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.476 11:18:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.476 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:27:07.476 [2024-04-18 11:18:15.491461] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.476 11:18:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.476 11:18:15 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:07.476 11:18:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.476 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:27:07.476 [2024-04-18 11:18:15.499616] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:07.476 11:18:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.476 11:18:15 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:07.476 11:18:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.476 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:27:07.476 null0 00:27:07.476 11:18:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.476 11:18:15 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:07.476 11:18:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.476 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:27:07.476 null1 00:27:07.476 11:18:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.476 11:18:15 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:07.476 11:18:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.476 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:27:07.476 11:18:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.476 11:18:15 -- host/discovery.sh@45 -- # hostpid=84456 00:27:07.476 11:18:15 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:07.476 11:18:15 -- host/discovery.sh@46 -- # waitforlisten 84456 /tmp/host.sock 00:27:07.476 11:18:15 -- common/autotest_common.sh@817 -- # '[' -z 84456 ']' 00:27:07.476 11:18:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:27:07.476 11:18:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:07.476 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:07.476 11:18:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:07.476 11:18:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:07.476 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:27:07.476 [2024-04-18 11:18:15.643650] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
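The host app started here (RPC socket /tmp/host.sock) and the target brought up just above are then wired together by the discovery test: the target exposes only a discovery service on port 8009 plus two null bdevs, and the host is told to follow that discovery service so any subsystem later published for its NQN shows up automatically as nvme0/nvme0n1. A condensed sketch of the RPC sequence traced around this point, assuming rpc_cmd resolves to scripts/rpc.py against the target's default RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: TCP transport, discovery listener, backing null bdevs
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $rpc bdev_null_create null0 1000 512
  $rpc bdev_null_create null1 1000 512
  # host side: follow the discovery service with the test host NQN
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test
  # publishing a subsystem and allowing that host NQN makes it appear on the host side
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test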
00:27:07.476 [2024-04-18 11:18:15.643810] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84456 ] 00:27:07.734 [2024-04-18 11:18:15.820346] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.992 [2024-04-18 11:18:16.108920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.558 11:18:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:08.558 11:18:16 -- common/autotest_common.sh@850 -- # return 0 00:27:08.558 11:18:16 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:08.558 11:18:16 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:08.558 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.558 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.558 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.558 11:18:16 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:08.558 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.558 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.558 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.558 11:18:16 -- host/discovery.sh@72 -- # notify_id=0 00:27:08.558 11:18:16 -- host/discovery.sh@83 -- # get_subsystem_names 00:27:08.558 11:18:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:08.558 11:18:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:08.558 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.558 11:18:16 -- host/discovery.sh@59 -- # sort 00:27:08.558 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.558 11:18:16 -- host/discovery.sh@59 -- # xargs 00:27:08.558 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.558 11:18:16 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:08.558 11:18:16 -- host/discovery.sh@84 -- # get_bdev_list 00:27:08.558 11:18:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.558 11:18:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:08.558 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.558 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.558 11:18:16 -- host/discovery.sh@55 -- # sort 00:27:08.558 11:18:16 -- host/discovery.sh@55 -- # xargs 00:27:08.558 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.558 11:18:16 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:08.558 11:18:16 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:08.558 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.558 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.558 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.558 11:18:16 -- host/discovery.sh@87 -- # get_subsystem_names 00:27:08.558 11:18:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:08.558 11:18:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:08.558 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.558 11:18:16 -- host/discovery.sh@59 -- # sort 00:27:08.558 11:18:16 -- common/autotest_common.sh@10 -- 
# set +x 00:27:08.558 11:18:16 -- host/discovery.sh@59 -- # xargs 00:27:08.558 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.816 11:18:16 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:08.816 11:18:16 -- host/discovery.sh@88 -- # get_bdev_list 00:27:08.816 11:18:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.816 11:18:16 -- host/discovery.sh@55 -- # sort 00:27:08.816 11:18:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:08.816 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.816 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.816 11:18:16 -- host/discovery.sh@55 -- # xargs 00:27:08.816 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.816 11:18:16 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:08.816 11:18:16 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:08.816 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.816 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.816 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.816 11:18:16 -- host/discovery.sh@91 -- # get_subsystem_names 00:27:08.816 11:18:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:08.816 11:18:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:08.816 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.816 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.816 11:18:16 -- host/discovery.sh@59 -- # sort 00:27:08.816 11:18:16 -- host/discovery.sh@59 -- # xargs 00:27:08.816 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.816 11:18:16 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:08.816 11:18:16 -- host/discovery.sh@92 -- # get_bdev_list 00:27:08.816 11:18:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.816 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.816 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.816 11:18:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:08.816 11:18:16 -- host/discovery.sh@55 -- # sort 00:27:08.816 11:18:16 -- host/discovery.sh@55 -- # xargs 00:27:08.816 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.816 11:18:16 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:08.816 11:18:16 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:08.816 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.816 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.816 [2024-04-18 11:18:16.992161] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.816 11:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.816 11:18:16 -- host/discovery.sh@97 -- # get_subsystem_names 00:27:08.816 11:18:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:08.816 11:18:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:08.816 11:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.816 11:18:16 -- host/discovery.sh@59 -- # sort 00:27:08.816 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:27:08.816 11:18:16 -- host/discovery.sh@59 -- # xargs 00:27:08.816 11:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.073 11:18:17 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:09.074 11:18:17 -- 
host/discovery.sh@98 -- # get_bdev_list 00:27:09.074 11:18:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.074 11:18:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:09.074 11:18:17 -- host/discovery.sh@55 -- # sort 00:27:09.074 11:18:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.074 11:18:17 -- common/autotest_common.sh@10 -- # set +x 00:27:09.074 11:18:17 -- host/discovery.sh@55 -- # xargs 00:27:09.074 11:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.074 11:18:17 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:09.074 11:18:17 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:09.074 11:18:17 -- host/discovery.sh@79 -- # expected_count=0 00:27:09.074 11:18:17 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:09.074 11:18:17 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:09.074 11:18:17 -- common/autotest_common.sh@901 -- # local max=10 00:27:09.074 11:18:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:09.074 11:18:17 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:09.074 11:18:17 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:09.074 11:18:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:09.074 11:18:17 -- host/discovery.sh@74 -- # jq '. | length' 00:27:09.074 11:18:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.074 11:18:17 -- common/autotest_common.sh@10 -- # set +x 00:27:09.074 11:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.074 11:18:17 -- host/discovery.sh@74 -- # notification_count=0 00:27:09.074 11:18:17 -- host/discovery.sh@75 -- # notify_id=0 00:27:09.074 11:18:17 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:09.074 11:18:17 -- common/autotest_common.sh@904 -- # return 0 00:27:09.074 11:18:17 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:09.074 11:18:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.074 11:18:17 -- common/autotest_common.sh@10 -- # set +x 00:27:09.074 11:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.074 11:18:17 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:09.074 11:18:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:09.074 11:18:17 -- common/autotest_common.sh@901 -- # local max=10 00:27:09.074 11:18:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:09.074 11:18:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:09.074 11:18:17 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:09.074 11:18:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:09.074 11:18:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:09.074 11:18:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.074 11:18:17 -- host/discovery.sh@59 -- # sort 00:27:09.074 11:18:17 -- common/autotest_common.sh@10 -- # set +x 00:27:09.074 11:18:17 -- host/discovery.sh@59 -- # xargs 00:27:09.074 11:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.074 11:18:17 -- common/autotest_common.sh@903 -- # 
[[ '' == \n\v\m\e\0 ]] 00:27:09.074 11:18:17 -- common/autotest_common.sh@906 -- # sleep 1 00:27:09.640 [2024-04-18 11:18:17.626893] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:09.640 [2024-04-18 11:18:17.626965] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:09.640 [2024-04-18 11:18:17.627000] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:09.640 [2024-04-18 11:18:17.714081] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:09.640 [2024-04-18 11:18:17.777376] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:09.640 [2024-04-18 11:18:17.777440] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:10.206 11:18:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:10.206 11:18:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:10.206 11:18:18 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:10.206 11:18:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:10.206 11:18:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:10.206 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.206 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.206 11:18:18 -- host/discovery.sh@59 -- # sort 00:27:10.206 11:18:18 -- host/discovery.sh@59 -- # xargs 00:27:10.206 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.206 11:18:18 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.206 11:18:18 -- common/autotest_common.sh@904 -- # return 0 00:27:10.206 11:18:18 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:10.206 11:18:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:10.206 11:18:18 -- common/autotest_common.sh@901 -- # local max=10 00:27:10.206 11:18:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:10.206 11:18:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:10.206 11:18:18 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:10.206 11:18:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.206 11:18:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:10.206 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.206 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.206 11:18:18 -- host/discovery.sh@55 -- # sort 00:27:10.206 11:18:18 -- host/discovery.sh@55 -- # xargs 00:27:10.206 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.206 11:18:18 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:10.206 11:18:18 -- common/autotest_common.sh@904 -- # return 0 00:27:10.206 11:18:18 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:10.206 11:18:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:10.206 11:18:18 -- common/autotest_common.sh@901 -- # local max=10 00:27:10.206 11:18:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:10.206 11:18:18 -- 
common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:10.206 11:18:18 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:10.206 11:18:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:10.206 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.206 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.206 11:18:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:10.206 11:18:18 -- host/discovery.sh@63 -- # sort -n 00:27:10.206 11:18:18 -- host/discovery.sh@63 -- # xargs 00:27:10.206 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.206 11:18:18 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:27:10.206 11:18:18 -- common/autotest_common.sh@904 -- # return 0 00:27:10.206 11:18:18 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:10.206 11:18:18 -- host/discovery.sh@79 -- # expected_count=1 00:27:10.206 11:18:18 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:10.206 11:18:18 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:10.206 11:18:18 -- common/autotest_common.sh@901 -- # local max=10 00:27:10.206 11:18:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:10.206 11:18:18 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:10.206 11:18:18 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:10.206 11:18:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:10.206 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.206 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.206 11:18:18 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:10.465 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.465 11:18:18 -- host/discovery.sh@74 -- # notification_count=1 00:27:10.465 11:18:18 -- host/discovery.sh@75 -- # notify_id=1 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:10.465 11:18:18 -- common/autotest_common.sh@904 -- # return 0 00:27:10.465 11:18:18 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:10.465 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.465 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.465 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.465 11:18:18 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:10.465 11:18:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:10.465 11:18:18 -- common/autotest_common.sh@901 -- # local max=10 00:27:10.465 11:18:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:10.465 11:18:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.465 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.465 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.465 11:18:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:10.465 11:18:18 -- host/discovery.sh@55 -- # sort 00:27:10.465 11:18:18 -- host/discovery.sh@55 -- # xargs 00:27:10.465 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:10.465 11:18:18 -- common/autotest_common.sh@904 -- # return 0 00:27:10.465 11:18:18 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:10.465 11:18:18 -- host/discovery.sh@79 -- # expected_count=1 00:27:10.465 11:18:18 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:10.465 11:18:18 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:10.465 11:18:18 -- common/autotest_common.sh@901 -- # local max=10 00:27:10.465 11:18:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:10.465 11:18:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:10.465 11:18:18 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:10.465 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.465 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.465 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.465 11:18:18 -- host/discovery.sh@74 -- # notification_count=1 00:27:10.465 11:18:18 -- host/discovery.sh@75 -- # notify_id=2 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:10.465 11:18:18 -- common/autotest_common.sh@904 -- # return 0 00:27:10.465 11:18:18 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:10.465 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.465 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.465 [2024-04-18 11:18:18.585748] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:10.465 [2024-04-18 11:18:18.586110] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:10.465 [2024-04-18 11:18:18.586156] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:10.465 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.465 11:18:18 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:10.465 11:18:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:10.465 11:18:18 -- common/autotest_common.sh@901 -- # local max=10 00:27:10.465 11:18:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:10.465 11:18:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:10.465 11:18:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:10.465 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.465 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.465 11:18:18 -- host/discovery.sh@59 -- # sort 00:27:10.465 11:18:18 -- host/discovery.sh@59 -- # xargs 00:27:10.465 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.465 11:18:18 -- common/autotest_common.sh@904 -- # return 0 00:27:10.465 11:18:18 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:10.465 11:18:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:10.465 11:18:18 -- common/autotest_common.sh@901 -- # local max=10 00:27:10.465 11:18:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:10.465 11:18:18 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:10.465 11:18:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.465 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.465 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.465 11:18:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:10.465 11:18:18 -- host/discovery.sh@55 -- # sort 00:27:10.465 11:18:18 -- host/discovery.sh@55 -- # xargs 00:27:10.466 [2024-04-18 11:18:18.673579] 
bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:10.724 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.724 11:18:18 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:10.724 11:18:18 -- common/autotest_common.sh@904 -- # return 0 00:27:10.724 11:18:18 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:10.724 11:18:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:10.724 11:18:18 -- common/autotest_common.sh@901 -- # local max=10 00:27:10.724 11:18:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:10.724 11:18:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:10.724 11:18:18 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:10.724 11:18:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:10.724 11:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.724 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:10.724 11:18:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:10.724 11:18:18 -- host/discovery.sh@63 -- # sort -n 00:27:10.724 11:18:18 -- host/discovery.sh@63 -- # xargs 00:27:10.724 11:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.724 [2024-04-18 11:18:18.739061] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:10.724 [2024-04-18 11:18:18.739134] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:10.724 [2024-04-18 11:18:18.739150] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:10.724 11:18:18 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:10.724 11:18:18 -- common/autotest_common.sh@906 -- # sleep 1 00:27:11.658 11:18:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:11.658 11:18:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:11.658 11:18:19 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:11.658 11:18:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:11.658 11:18:19 -- host/discovery.sh@63 -- # xargs 00:27:11.658 11:18:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:11.658 11:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.658 11:18:19 -- host/discovery.sh@63 -- # sort -n 00:27:11.658 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:27:11.658 11:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.658 11:18:19 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:11.658 11:18:19 -- common/autotest_common.sh@904 -- # return 0 00:27:11.658 11:18:19 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:11.658 11:18:19 -- host/discovery.sh@79 -- # expected_count=0 00:27:11.658 11:18:19 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:11.658 
11:18:19 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:11.658 11:18:19 -- common/autotest_common.sh@901 -- # local max=10 00:27:11.658 11:18:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:11.658 11:18:19 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:11.658 11:18:19 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:11.658 11:18:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:11.658 11:18:19 -- host/discovery.sh@74 -- # jq '. | length' 00:27:11.658 11:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.658 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:27:11.658 11:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.658 11:18:19 -- host/discovery.sh@74 -- # notification_count=0 00:27:11.658 11:18:19 -- host/discovery.sh@75 -- # notify_id=2 00:27:11.658 11:18:19 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:11.658 11:18:19 -- common/autotest_common.sh@904 -- # return 0 00:27:11.658 11:18:19 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:11.658 11:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.658 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:27:11.658 [2024-04-18 11:18:19.871644] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:11.658 [2024-04-18 11:18:19.871709] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:11.658 [2024-04-18 11:18:19.874186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.658 [2024-04-18 11:18:19.874238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.658 [2024-04-18 11:18:19.874260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.658 [2024-04-18 11:18:19.874274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.658 [2024-04-18 11:18:19.874289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.658 [2024-04-18 11:18:19.874302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.658 [2024-04-18 11:18:19.874317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.658 [2024-04-18 11:18:19.874330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.658 [2024-04-18 11:18:19.874343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:27:11.658 11:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.658 11:18:19 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:11.658 11:18:19 -- common/autotest_common.sh@900 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:11.658 11:18:19 -- common/autotest_common.sh@901 -- # local max=10 00:27:11.658 11:18:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:11.658 11:18:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:11.658 11:18:19 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:11.918 11:18:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:11.918 11:18:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:11.918 11:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.918 11:18:19 -- host/discovery.sh@59 -- # xargs 00:27:11.918 11:18:19 -- host/discovery.sh@59 -- # sort 00:27:11.918 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:27:11.918 [2024-04-18 11:18:19.884092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:27:11.918 [2024-04-18 11:18:19.894133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:11.918 [2024-04-18 11:18:19.894312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.918 [2024-04-18 11:18:19.894385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.918 [2024-04-18 11:18:19.894417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:27:11.918 [2024-04-18 11:18:19.894435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:27:11.918 [2024-04-18 11:18:19.894472] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:27:11.918 [2024-04-18 11:18:19.894495] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:11.918 [2024-04-18 11:18:19.894510] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:11.918 [2024-04-18 11:18:19.894525] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:11.918 [2024-04-18 11:18:19.894551] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.918 11:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.918 [2024-04-18 11:18:19.904233] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:11.918 [2024-04-18 11:18:19.904349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.918 [2024-04-18 11:18:19.904424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.918 [2024-04-18 11:18:19.904448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:27:11.918 [2024-04-18 11:18:19.904464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:27:11.918 [2024-04-18 11:18:19.904488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:27:11.918 [2024-04-18 11:18:19.904509] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:11.918 [2024-04-18 11:18:19.904522] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:11.918 [2024-04-18 11:18:19.904552] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:11.918 [2024-04-18 11:18:19.904574] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:11.918 [2024-04-18 11:18:19.914314] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:11.918 [2024-04-18 11:18:19.914464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.918 [2024-04-18 11:18:19.914527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.918 [2024-04-18 11:18:19.914551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:27:11.918 [2024-04-18 11:18:19.914566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:27:11.918 [2024-04-18 11:18:19.914591] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:27:11.918 [2024-04-18 11:18:19.914612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:11.918 [2024-04-18 11:18:19.914625] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:11.918 [2024-04-18 11:18:19.914638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:11.918 [2024-04-18 11:18:19.914701] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:11.918 [2024-04-18 11:18:19.924424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:11.918 [2024-04-18 11:18:19.924544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.918 [2024-04-18 11:18:19.924608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.918 [2024-04-18 11:18:19.924632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:27:11.918 [2024-04-18 11:18:19.924648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:27:11.918 [2024-04-18 11:18:19.924672] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:27:11.918 [2024-04-18 11:18:19.924708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:11.918 [2024-04-18 11:18:19.924724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:11.918 [2024-04-18 11:18:19.924738] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:11.918 [2024-04-18 11:18:19.924760] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:11.918 11:18:19 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.918 11:18:19 -- common/autotest_common.sh@904 -- # return 0 00:27:11.918 11:18:19 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:11.918 11:18:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:11.918 11:18:19 -- common/autotest_common.sh@901 -- # local max=10 00:27:11.918 11:18:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:11.918 [2024-04-18 11:18:19.934505] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:11.918 [2024-04-18 11:18:19.934609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.918 11:18:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:11.918 [2024-04-18 11:18:19.934669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.918 [2024-04-18 11:18:19.934693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:27:11.918 [2024-04-18 11:18:19.934709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:27:11.918 [2024-04-18 11:18:19.934732] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:27:11.918 [2024-04-18 11:18:19.934769] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:11.918 [2024-04-18 11:18:19.934785] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:11.918 [2024-04-18 11:18:19.934798] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
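The common/autotest_common.sh@900-906 xtrace lines interleaved with the reset errors above come from the harness's waitforcondition helper: it evaluates the quoted condition string once per second until it succeeds or ten attempts are used up. Reconstructed from that trace (the real helper in autotest_common.sh may differ in detail), it is roughly:

waitforcondition() {
    # assumed sketch based on the @900-906 trace lines, not a verbatim copy
    local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0   # condition met, stop polling
        sleep 1                    # otherwise retry after a second
    done
    return 1                       # condition never became true
}

Callers such as is_notification_count_eq pass compound strings like 'get_notification_count && ((notification_count == expected_count))', which is why the eval lines in the trace show those fragments re-quoted word by word.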
00:27:11.919 [2024-04-18 11:18:19.934820] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:11.919 11:18:19 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:11.919 11:18:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.919 11:18:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:11.919 11:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.919 11:18:19 -- host/discovery.sh@55 -- # sort 00:27:11.919 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:27:11.919 11:18:19 -- host/discovery.sh@55 -- # xargs 00:27:11.919 [2024-04-18 11:18:19.944577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:11.919 [2024-04-18 11:18:19.944708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.919 [2024-04-18 11:18:19.944772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.919 [2024-04-18 11:18:19.944797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:27:11.919 [2024-04-18 11:18:19.944813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:27:11.919 [2024-04-18 11:18:19.944850] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:27:11.919 [2024-04-18 11:18:19.944871] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:11.919 [2024-04-18 11:18:19.944891] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:11.919 [2024-04-18 11:18:19.944914] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:11.919 [2024-04-18 11:18:19.944950] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:11.919 [2024-04-18 11:18:19.954671] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:11.919 [2024-04-18 11:18:19.954791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.919 [2024-04-18 11:18:19.954856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.919 [2024-04-18 11:18:19.954880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:27:11.919 [2024-04-18 11:18:19.954896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:27:11.919 [2024-04-18 11:18:19.954921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:27:11.919 [2024-04-18 11:18:19.954943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:11.919 [2024-04-18 11:18:19.954956] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:11.919 [2024-04-18 11:18:19.954970] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:11.919 [2024-04-18 11:18:19.954993] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
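The host/discovery.sh@55, @59 and @63 trace lines show how the helpers being polled are built: rpc_cmd (the harness wrapper that effectively forwards to SPDK's scripts/rpc.py) queries the host application's RPC socket at /tmp/host.sock, and jq/sort/xargs flatten the JSON into a space-separated list. Outside the harness, the equivalent queries would look roughly like:

# bdev names attached on the host side (get_bdev_list)
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
# controller names (get_subsystem_names)
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
# listener ports of one controller (get_subsystem_paths nvme0)
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs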
00:27:11.919 [2024-04-18 11:18:19.957382] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:11.919 [2024-04-18 11:18:19.957430] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:11.919 11:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.919 11:18:19 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:11.919 11:18:19 -- common/autotest_common.sh@904 -- # return 0 00:27:11.919 11:18:19 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:11.919 11:18:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:11.919 11:18:19 -- common/autotest_common.sh@901 -- # local max=10 00:27:11.919 11:18:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:11.919 11:18:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:11.919 11:18:19 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:11.919 11:18:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:11.919 11:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.919 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:27:11.919 11:18:19 -- host/discovery.sh@63 -- # sort -n 00:27:11.919 11:18:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:11.919 11:18:19 -- host/discovery.sh@63 -- # xargs 00:27:11.919 11:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.919 11:18:20 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:27:11.919 11:18:20 -- common/autotest_common.sh@904 -- # return 0 00:27:11.919 11:18:20 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:11.919 11:18:20 -- host/discovery.sh@79 -- # expected_count=0 00:27:11.919 11:18:20 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:11.919 11:18:20 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:11.919 11:18:20 -- common/autotest_common.sh@901 -- # local max=10 00:27:11.919 11:18:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:11.919 11:18:20 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:11.919 11:18:20 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:11.919 11:18:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:11.919 11:18:20 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:11.919 11:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.919 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:27:11.919 11:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.919 11:18:20 -- host/discovery.sh@74 -- # notification_count=0 00:27:11.919 11:18:20 -- host/discovery.sh@75 -- # notify_id=2 00:27:11.919 11:18:20 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:11.919 11:18:20 -- common/autotest_common.sh@904 -- # return 0 00:27:11.919 11:18:20 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:11.919 11:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.919 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:27:11.919 11:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.919 11:18:20 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:11.919 11:18:20 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:11.919 11:18:20 -- common/autotest_common.sh@901 -- # local max=10 00:27:11.919 11:18:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:11.919 11:18:20 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:11.919 11:18:20 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:11.919 11:18:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:11.919 11:18:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:11.919 11:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.919 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:27:11.919 11:18:20 -- host/discovery.sh@59 -- # sort 00:27:11.919 11:18:20 -- host/discovery.sh@59 -- # xargs 00:27:11.919 11:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.177 11:18:20 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:27:12.177 11:18:20 -- common/autotest_common.sh@904 -- # return 0 00:27:12.177 11:18:20 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:12.177 11:18:20 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:12.177 11:18:20 -- common/autotest_common.sh@901 -- # local max=10 00:27:12.177 11:18:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.177 11:18:20 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:12.177 11:18:20 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:12.177 11:18:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.177 11:18:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.177 11:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.177 11:18:20 -- host/discovery.sh@55 -- # sort 00:27:12.177 11:18:20 -- host/discovery.sh@55 -- # xargs 00:27:12.177 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:27:12.177 11:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.177 11:18:20 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:27:12.177 11:18:20 -- common/autotest_common.sh@904 -- # return 0 00:27:12.177 11:18:20 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:12.177 11:18:20 -- host/discovery.sh@79 -- # expected_count=2 00:27:12.177 11:18:20 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:12.177 11:18:20 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:12.177 11:18:20 -- common/autotest_common.sh@901 -- # local max=10 00:27:12.177 11:18:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:12.177 11:18:20 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:12.177 11:18:20 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:12.177 11:18:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:12.177 11:18:20 -- host/discovery.sh@74 -- # jq '. | length' 00:27:12.177 11:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.177 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:27:12.177 11:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.177 11:18:20 -- host/discovery.sh@74 -- # notification_count=2 00:27:12.177 11:18:20 -- host/discovery.sh@75 -- # notify_id=4 00:27:12.177 11:18:20 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:12.177 11:18:20 -- common/autotest_common.sh@904 -- # return 0 00:27:12.177 11:18:20 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:12.177 11:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.177 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:27:13.109 [2024-04-18 11:18:21.290473] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:13.109 [2024-04-18 11:18:21.290521] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:13.109 [2024-04-18 11:18:21.290568] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:13.367 [2024-04-18 11:18:21.376737] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:13.367 [2024-04-18 11:18:21.446150] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:13.367 [2024-04-18 11:18:21.446256] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:13.367 11:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.367 11:18:21 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:13.367 11:18:21 -- common/autotest_common.sh@638 -- # local es=0 00:27:13.367 11:18:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:13.367 11:18:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:13.367 11:18:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:13.367 11:18:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:13.367 11:18:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:13.367 11:18:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:13.367 11:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.367 11:18:21 -- 
common/autotest_common.sh@10 -- # set +x 00:27:13.367 2024/04/18 11:18:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:27:13.367 request: 00:27:13.367 { 00:27:13.367 "method": "bdev_nvme_start_discovery", 00:27:13.367 "params": { 00:27:13.367 "name": "nvme", 00:27:13.367 "trtype": "tcp", 00:27:13.367 "traddr": "10.0.0.2", 00:27:13.367 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:13.367 "adrfam": "ipv4", 00:27:13.367 "trsvcid": "8009", 00:27:13.367 "wait_for_attach": true 00:27:13.367 } 00:27:13.367 } 00:27:13.367 Got JSON-RPC error response 00:27:13.367 GoRPCClient: error on JSON-RPC call 00:27:13.367 11:18:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:13.367 11:18:21 -- common/autotest_common.sh@641 -- # es=1 00:27:13.367 11:18:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:13.367 11:18:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:13.367 11:18:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:13.367 11:18:21 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:13.367 11:18:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:13.367 11:18:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:13.367 11:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.367 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:27:13.367 11:18:21 -- host/discovery.sh@67 -- # sort 00:27:13.367 11:18:21 -- host/discovery.sh@67 -- # xargs 00:27:13.367 11:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.367 11:18:21 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:13.367 11:18:21 -- host/discovery.sh@146 -- # get_bdev_list 00:27:13.367 11:18:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.367 11:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.367 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:27:13.367 11:18:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:13.367 11:18:21 -- host/discovery.sh@55 -- # sort 00:27:13.367 11:18:21 -- host/discovery.sh@55 -- # xargs 00:27:13.367 11:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.367 11:18:21 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:13.367 11:18:21 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:13.367 11:18:21 -- common/autotest_common.sh@638 -- # local es=0 00:27:13.367 11:18:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:13.367 11:18:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:13.367 11:18:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:13.367 11:18:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:13.367 11:18:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:13.367 11:18:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:13.367 11:18:21 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.367 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:27:13.626 2024/04/18 11:18:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:27:13.626 request: 00:27:13.626 { 00:27:13.626 "method": "bdev_nvme_start_discovery", 00:27:13.626 "params": { 00:27:13.626 "name": "nvme_second", 00:27:13.626 "trtype": "tcp", 00:27:13.626 "traddr": "10.0.0.2", 00:27:13.626 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:13.626 "adrfam": "ipv4", 00:27:13.626 "trsvcid": "8009", 00:27:13.626 "wait_for_attach": true 00:27:13.626 } 00:27:13.626 } 00:27:13.626 Got JSON-RPC error response 00:27:13.626 GoRPCClient: error on JSON-RPC call 00:27:13.626 11:18:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:13.626 11:18:21 -- common/autotest_common.sh@641 -- # es=1 00:27:13.626 11:18:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:13.626 11:18:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:13.626 11:18:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:13.626 11:18:21 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:13.626 11:18:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:13.626 11:18:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:13.626 11:18:21 -- host/discovery.sh@67 -- # sort 00:27:13.626 11:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.626 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:27:13.626 11:18:21 -- host/discovery.sh@67 -- # xargs 00:27:13.626 11:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.626 11:18:21 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:13.626 11:18:21 -- host/discovery.sh@152 -- # get_bdev_list 00:27:13.626 11:18:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.626 11:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.626 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:27:13.626 11:18:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:13.626 11:18:21 -- host/discovery.sh@55 -- # sort 00:27:13.626 11:18:21 -- host/discovery.sh@55 -- # xargs 00:27:13.626 11:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.626 11:18:21 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:13.626 11:18:21 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:13.626 11:18:21 -- common/autotest_common.sh@638 -- # local es=0 00:27:13.626 11:18:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:13.626 11:18:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:13.626 11:18:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:13.626 11:18:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:13.626 11:18:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:13.626 11:18:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 
-s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:13.626 11:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.626 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:27:14.559 [2024-04-18 11:18:22.710907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.559 [2024-04-18 11:18:22.711068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.559 [2024-04-18 11:18:22.711097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010640 with addr=10.0.0.2, port=8010 00:27:14.559 [2024-04-18 11:18:22.711185] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:14.559 [2024-04-18 11:18:22.711204] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:14.559 [2024-04-18 11:18:22.711220] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:15.928 [2024-04-18 11:18:23.710930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.928 [2024-04-18 11:18:23.711100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.928 [2024-04-18 11:18:23.711141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010840 with addr=10.0.0.2, port=8010 00:27:15.928 [2024-04-18 11:18:23.711207] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:15.928 [2024-04-18 11:18:23.711224] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:15.928 [2024-04-18 11:18:23.711238] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:16.494 [2024-04-18 11:18:24.710636] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:16.494 2024/04/18 11:18:24 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:27:16.494 request: 00:27:16.494 { 00:27:16.494 "method": "bdev_nvme_start_discovery", 00:27:16.494 "params": { 00:27:16.494 "name": "nvme_second", 00:27:16.494 "trtype": "tcp", 00:27:16.494 "traddr": "10.0.0.2", 00:27:16.494 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:16.494 "adrfam": "ipv4", 00:27:16.752 "trsvcid": "8010", 00:27:16.752 "attach_timeout_ms": 3000 00:27:16.752 } 00:27:16.752 } 00:27:16.752 Got JSON-RPC error response 00:27:16.752 GoRPCClient: error on JSON-RPC call 00:27:16.752 11:18:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:16.752 11:18:24 -- common/autotest_common.sh@641 -- # es=1 00:27:16.752 11:18:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:16.752 11:18:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:16.752 11:18:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:16.752 11:18:24 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:16.752 11:18:24 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:16.752 11:18:24 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:16.752 11:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.752 11:18:24 -- common/autotest_common.sh@10 -- # set +x 00:27:16.752 11:18:24 -- host/discovery.sh@67 -- # sort 00:27:16.752 11:18:24 -- 
host/discovery.sh@67 -- # xargs 00:27:16.752 11:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.752 11:18:24 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:16.752 11:18:24 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:16.752 11:18:24 -- host/discovery.sh@161 -- # kill 84456 00:27:16.752 11:18:24 -- host/discovery.sh@162 -- # nvmftestfini 00:27:16.752 11:18:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:16.752 11:18:24 -- nvmf/common.sh@117 -- # sync 00:27:16.752 11:18:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.752 11:18:24 -- nvmf/common.sh@120 -- # set +e 00:27:16.752 11:18:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.752 11:18:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.752 rmmod nvme_tcp 00:27:16.752 rmmod nvme_fabrics 00:27:16.752 rmmod nvme_keyring 00:27:16.752 11:18:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.752 11:18:24 -- nvmf/common.sh@124 -- # set -e 00:27:16.752 11:18:24 -- nvmf/common.sh@125 -- # return 0 00:27:16.752 11:18:24 -- nvmf/common.sh@478 -- # '[' -n 84406 ']' 00:27:16.752 11:18:24 -- nvmf/common.sh@479 -- # killprocess 84406 00:27:16.752 11:18:24 -- common/autotest_common.sh@936 -- # '[' -z 84406 ']' 00:27:16.752 11:18:24 -- common/autotest_common.sh@940 -- # kill -0 84406 00:27:16.752 11:18:24 -- common/autotest_common.sh@941 -- # uname 00:27:16.752 11:18:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:16.752 11:18:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84406 00:27:16.752 11:18:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:16.752 killing process with pid 84406 00:27:16.752 11:18:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:16.752 11:18:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84406' 00:27:16.752 11:18:24 -- common/autotest_common.sh@955 -- # kill 84406 00:27:16.752 11:18:24 -- common/autotest_common.sh@960 -- # wait 84406 00:27:18.127 11:18:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:18.127 11:18:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:18.127 11:18:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:18.127 11:18:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.127 11:18:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.127 11:18:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.127 11:18:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.127 11:18:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.127 11:18:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:18.127 00:27:18.127 real 0m12.155s 00:27:18.127 user 0m23.723s 00:27:18.127 sys 0m1.911s 00:27:18.127 ************************************ 00:27:18.127 END TEST nvmf_discovery 00:27:18.127 ************************************ 00:27:18.127 11:18:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:18.127 11:18:26 -- common/autotest_common.sh@10 -- # set +x 00:27:18.127 11:18:26 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:18.127 11:18:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:18.127 11:18:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:18.127 11:18:26 -- common/autotest_common.sh@10 -- # set +x 00:27:18.127 ************************************ 00:27:18.127 START TEST 
nvmf_discovery_remove_ifc 00:27:18.127 ************************************ 00:27:18.127 11:18:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:18.127 * Looking for test storage... 00:27:18.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:18.127 11:18:26 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:18.127 11:18:26 -- nvmf/common.sh@7 -- # uname -s 00:27:18.127 11:18:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.127 11:18:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.127 11:18:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.127 11:18:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.127 11:18:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.127 11:18:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.127 11:18:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.127 11:18:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.127 11:18:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.127 11:18:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.127 11:18:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:27:18.127 11:18:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:27:18.127 11:18:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.127 11:18:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.127 11:18:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:18.127 11:18:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.127 11:18:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:18.127 11:18:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.127 11:18:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.127 11:18:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.127 11:18:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.127 11:18:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.127 11:18:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.127 11:18:26 -- paths/export.sh@5 -- # export PATH 00:27:18.127 11:18:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.127 11:18:26 -- nvmf/common.sh@47 -- # : 0 00:27:18.127 11:18:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:18.127 11:18:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:18.127 11:18:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.127 11:18:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.127 11:18:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.127 11:18:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:18.127 11:18:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:18.127 11:18:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:18.127 11:18:26 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:18.127 11:18:26 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:18.127 11:18:26 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:18.127 11:18:26 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:18.127 11:18:26 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:18.127 11:18:26 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:18.127 11:18:26 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:18.127 11:18:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:18.127 11:18:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.127 11:18:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:18.127 11:18:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:18.127 11:18:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:18.127 11:18:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.127 11:18:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.127 11:18:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.127 11:18:26 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:18.127 11:18:26 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:18.127 11:18:26 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:18.127 11:18:26 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:18.127 11:18:26 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:18.127 11:18:26 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:18.127 11:18:26 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.127 11:18:26 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.127 11:18:26 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:18.127 11:18:26 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:18.127 11:18:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:18.127 11:18:26 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:18.127 11:18:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:18.127 11:18:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.127 11:18:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:18.127 11:18:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:18.127 11:18:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:18.128 11:18:26 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:18.128 11:18:26 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:18.128 11:18:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:18.128 Cannot find device "nvmf_tgt_br" 00:27:18.128 11:18:26 -- nvmf/common.sh@155 -- # true 00:27:18.128 11:18:26 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:18.128 Cannot find device "nvmf_tgt_br2" 00:27:18.128 11:18:26 -- nvmf/common.sh@156 -- # true 00:27:18.128 11:18:26 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:18.128 11:18:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:18.128 Cannot find device "nvmf_tgt_br" 00:27:18.128 11:18:26 -- nvmf/common.sh@158 -- # true 00:27:18.128 11:18:26 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:18.386 Cannot find device "nvmf_tgt_br2" 00:27:18.386 11:18:26 -- nvmf/common.sh@159 -- # true 00:27:18.386 11:18:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:18.386 11:18:26 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:18.386 11:18:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:18.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:18.386 11:18:26 -- nvmf/common.sh@162 -- # true 00:27:18.386 11:18:26 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:18.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:18.386 11:18:26 -- nvmf/common.sh@163 -- # true 00:27:18.386 11:18:26 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:18.386 11:18:26 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:18.386 11:18:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:18.386 11:18:26 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:18.386 11:18:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:18.386 11:18:26 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:18.386 11:18:26 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:18.386 11:18:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:18.386 11:18:26 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:18.386 11:18:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:18.386 11:18:26 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:18.386 11:18:26 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:18.386 11:18:26 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:18.386 11:18:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:18.386 11:18:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:18.386 11:18:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:18.386 11:18:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:18.386 11:18:26 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:18.386 11:18:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:18.386 11:18:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:18.386 11:18:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:18.386 11:18:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:18.386 11:18:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:18.386 11:18:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:18.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:27:18.386 00:27:18.386 --- 10.0.0.2 ping statistics --- 00:27:18.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.386 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:27:18.386 11:18:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:18.386 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:18.386 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:27:18.386 00:27:18.386 --- 10.0.0.3 ping statistics --- 00:27:18.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.386 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:27:18.386 11:18:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:18.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:18.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:27:18.644 00:27:18.644 --- 10.0.0.1 ping statistics --- 00:27:18.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.644 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:27:18.644 11:18:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.644 11:18:26 -- nvmf/common.sh@422 -- # return 0 00:27:18.644 11:18:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:18.644 11:18:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.644 11:18:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:18.644 11:18:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:18.644 11:18:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.644 11:18:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:18.644 11:18:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:18.644 11:18:26 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:18.644 11:18:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:18.644 11:18:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:18.644 11:18:26 -- common/autotest_common.sh@10 -- # set +x 00:27:18.644 11:18:26 -- nvmf/common.sh@470 -- # nvmfpid=84958 00:27:18.644 11:18:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:18.644 11:18:26 -- nvmf/common.sh@471 -- # waitforlisten 84958 00:27:18.644 11:18:26 -- common/autotest_common.sh@817 -- # '[' -z 84958 ']' 00:27:18.644 11:18:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.644 11:18:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:18.644 11:18:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.644 11:18:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:18.644 11:18:26 -- common/autotest_common.sh@10 -- # set +x 00:27:18.644 [2024-04-18 11:18:26.754252] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:18.644 [2024-04-18 11:18:26.754430] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.903 [2024-04-18 11:18:26.931569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.160 [2024-04-18 11:18:27.198651] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.160 [2024-04-18 11:18:27.198720] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.160 [2024-04-18 11:18:27.198755] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.160 [2024-04-18 11:18:27.198779] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.160 [2024-04-18 11:18:27.198794] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
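(For reference: the nvmf_veth_init steps traced above boil down to roughly the following. This is a condensed sketch, not an exact replay of the script; the interface names, addresses, and firewall rules are taken from the trace, and the intermediate link-up steps are folded into a comment.)

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target ends of the veth pairs live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
# bring every interface up (nvmf_init_if/_br, nvmf_tgt_if/_if2 and lo inside the netns), then bridge the host-side ends
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # initiator -> target reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability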
00:27:19.160 [2024-04-18 11:18:27.198838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.726 11:18:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:19.726 11:18:27 -- common/autotest_common.sh@850 -- # return 0 00:27:19.726 11:18:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:19.726 11:18:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:19.726 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:27:19.726 11:18:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.726 11:18:27 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:19.726 11:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.726 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:27:19.726 [2024-04-18 11:18:27.761425] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.726 [2024-04-18 11:18:27.769557] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:19.726 null0 00:27:19.726 [2024-04-18 11:18:27.801525] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.726 11:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.726 11:18:27 -- host/discovery_remove_ifc.sh@59 -- # hostpid=85008 00:27:19.726 11:18:27 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:19.726 11:18:27 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 85008 /tmp/host.sock 00:27:19.726 11:18:27 -- common/autotest_common.sh@817 -- # '[' -z 85008 ']' 00:27:19.726 11:18:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:27:19.726 11:18:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:19.726 11:18:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:19.726 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:19.726 11:18:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:19.726 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:27:19.726 [2024-04-18 11:18:27.937133] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:27:19.726 [2024-04-18 11:18:27.937318] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85008 ] 00:27:19.984 [2024-04-18 11:18:28.110611] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.242 [2024-04-18 11:18:28.360502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.807 11:18:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:20.807 11:18:28 -- common/autotest_common.sh@850 -- # return 0 00:27:20.807 11:18:28 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.807 11:18:28 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:20.807 11:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.807 11:18:28 -- common/autotest_common.sh@10 -- # set +x 00:27:20.807 11:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.807 11:18:28 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:20.807 11:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.807 11:18:28 -- common/autotest_common.sh@10 -- # set +x 00:27:21.069 11:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.069 11:18:29 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:21.069 11:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.069 11:18:29 -- common/autotest_common.sh@10 -- # set +x 00:27:22.001 [2024-04-18 11:18:30.176540] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:22.001 [2024-04-18 11:18:30.176604] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:22.001 [2024-04-18 11:18:30.176639] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:22.258 [2024-04-18 11:18:30.262812] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:22.258 [2024-04-18 11:18:30.327931] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:22.258 [2024-04-18 11:18:30.328041] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:22.258 [2024-04-18 11:18:30.328145] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:22.258 [2024-04-18 11:18:30.328209] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:22.258 [2024-04-18 11:18:30.328261] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:22.258 11:18:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.258 11:18:30 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:22.258 11:18:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:22.258 11:18:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:22.258 11:18:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:22.258 11:18:30 -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:27:22.258 11:18:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.258 11:18:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.258 11:18:30 -- common/autotest_common.sh@10 -- # set +x 00:27:22.258 [2024-04-18 11:18:30.335187] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x614000006840 was disconnected and freed. delete nvme_qpair. 00:27:22.258 11:18:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.258 11:18:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:22.258 11:18:30 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:27:22.258 11:18:30 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:27:22.258 11:18:30 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:22.259 11:18:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:22.259 11:18:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.259 11:18:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.259 11:18:30 -- common/autotest_common.sh@10 -- # set +x 00:27:22.259 11:18:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:22.259 11:18:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:22.259 11:18:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:22.259 11:18:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.259 11:18:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:22.259 11:18:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:23.630 11:18:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:23.630 11:18:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.630 11:18:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:23.630 11:18:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:23.630 11:18:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:23.630 11:18:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.630 11:18:31 -- common/autotest_common.sh@10 -- # set +x 00:27:23.630 11:18:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.630 11:18:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:23.630 11:18:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:24.564 11:18:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.564 11:18:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.564 11:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.564 11:18:32 -- common/autotest_common.sh@10 -- # set +x 00:27:24.564 11:18:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.564 11:18:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.564 11:18:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.564 11:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.564 11:18:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:24.564 11:18:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:25.497 11:18:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.497 11:18:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.497 11:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.497 11:18:33 -- common/autotest_common.sh@10 -- # set +x 
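(The repeated bdev_get_bdevs / jq / sort / xargs / sleep lines around this point are the test's polling loop. A minimal reconstruction of that loop, inferred from the trace rather than copied from discovery_remove_ifc.sh, looks roughly like this:)

get_bdev_list() {
    # names of all bdevs known to the host app listening on /tmp/host.sock
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once per second until the bdev list equals the expected value
    # ("nvme0n1" while the discovery controller is attached, "" once it is torn down)
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}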
00:27:25.497 11:18:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.497 11:18:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.497 11:18:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.497 11:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.497 11:18:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:25.497 11:18:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.870 11:18:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.870 11:18:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.870 11:18:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.870 11:18:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.870 11:18:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.870 11:18:34 -- common/autotest_common.sh@10 -- # set +x 00:27:26.870 11:18:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.870 11:18:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.870 11:18:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:26.870 11:18:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.804 11:18:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:27.804 11:18:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.804 11:18:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.804 11:18:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.804 11:18:35 -- common/autotest_common.sh@10 -- # set +x 00:27:27.804 11:18:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.804 11:18:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.804 11:18:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.804 [2024-04-18 11:18:35.757780] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:27.804 [2024-04-18 11:18:35.757871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:27.804 [2024-04-18 11:18:35.757895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:27.804 [2024-04-18 11:18:35.757917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:27.804 [2024-04-18 11:18:35.757931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:27.804 [2024-04-18 11:18:35.757947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:27.804 [2024-04-18 11:18:35.757960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:27.804 [2024-04-18 11:18:35.757975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:27.804 [2024-04-18 11:18:35.757990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:27.804 [2024-04-18 11:18:35.758004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:27.804 [2024-04-18 11:18:35.758026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:27.804 [2024-04-18 11:18:35.758039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:27:27.804 [2024-04-18 11:18:35.767776] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:27:27.804 11:18:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:27.804 11:18:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.804 [2024-04-18 11:18:35.777815] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:28.739 11:18:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.739 11:18:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.739 11:18:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.739 11:18:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.740 11:18:36 -- common/autotest_common.sh@10 -- # set +x 00:27:28.740 11:18:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.740 11:18:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.740 [2024-04-18 11:18:36.837238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:29.673 [2024-04-18 11:18:37.861264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:29.673 [2024-04-18 11:18:37.861435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005640 with addr=10.0.0.2, port=4420 00:27:29.673 [2024-04-18 11:18:37.861509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:27:29.673 [2024-04-18 11:18:37.863076] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:27:29.673 [2024-04-18 11:18:37.863225] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
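(The errno 110 connect failures here are the expected result of the earlier step that deleted 10.0.0.2 from nvmf_tgt_if and took the interface down while a discovery controller was attached. Condensed from the trace, the sequence under test is roughly the following; the option values are the ones visible in the bdev_nvme_start_discovery call traced earlier:)

# attach via discovery with short loss/reconnect timeouts so the bdev is dropped quickly
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach
wait_for_bdev nvme0n1                                                    # namespace shows up as nvme0n1
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if   # pull the target address
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down              # and the interface
wait_for_bdev ''                                                         # bdev must disappear after the loss timeout

(Later in the trace the test restores the address, brings nvmf_tgt_if back up, and waits for the rediscovered controller to appear as nvme1n1.)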
00:27:29.673 [2024-04-18 11:18:37.863315] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:29.673 [2024-04-18 11:18:37.863436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.673 [2024-04-18 11:18:37.863534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.673 [2024-04-18 11:18:37.863625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.674 [2024-04-18 11:18:37.863691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.674 [2024-04-18 11:18:37.863752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.674 [2024-04-18 11:18:37.863807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.674 [2024-04-18 11:18:37.863867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.674 [2024-04-18 11:18:37.863945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.674 [2024-04-18 11:18:37.864001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.674 [2024-04-18 11:18:37.864065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.674 [2024-04-18 11:18:37.864150] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:29.674 [2024-04-18 11:18:37.864237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005240 (9): Bad file descriptor 00:27:29.674 [2024-04-18 11:18:37.864563] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:29.674 [2024-04-18 11:18:37.864682] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:29.674 11:18:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.674 11:18:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:29.674 11:18:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.047 11:18:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.047 11:18:38 -- common/autotest_common.sh@10 -- # set +x 00:27:31.047 11:18:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.047 11:18:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.047 11:18:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.047 11:18:38 -- common/autotest_common.sh@10 -- # set +x 00:27:31.047 11:18:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.047 11:18:39 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:31.047 11:18:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.981 [2024-04-18 11:18:39.868898] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:31.981 [2024-04-18 11:18:39.868954] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:31.981 [2024-04-18 11:18:39.868987] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:31.981 [2024-04-18 11:18:39.957087] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:31.981 [2024-04-18 11:18:40.019840] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:31.981 [2024-04-18 11:18:40.019916] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:31.981 [2024-04-18 11:18:40.019991] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:31.981 [2024-04-18 11:18:40.020019] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme1 done 00:27:31.981 [2024-04-18 11:18:40.020037] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:31.981 11:18:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.981 11:18:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.981 11:18:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.981 11:18:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.981 11:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.981 11:18:40 -- common/autotest_common.sh@10 -- # set +x 00:27:31.981 [2024-04-18 11:18:40.028222] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61400000a040 was disconnected and freed. delete nvme_qpair. 00:27:31.981 11:18:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.981 11:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.981 11:18:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:31.981 11:18:40 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:31.981 11:18:40 -- host/discovery_remove_ifc.sh@90 -- # killprocess 85008 00:27:31.981 11:18:40 -- common/autotest_common.sh@936 -- # '[' -z 85008 ']' 00:27:31.981 11:18:40 -- common/autotest_common.sh@940 -- # kill -0 85008 00:27:31.981 11:18:40 -- common/autotest_common.sh@941 -- # uname 00:27:31.981 11:18:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:31.981 11:18:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85008 00:27:31.981 killing process with pid 85008 00:27:31.981 11:18:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:31.981 11:18:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:31.981 11:18:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85008' 00:27:31.981 11:18:40 -- common/autotest_common.sh@955 -- # kill 85008 00:27:31.981 11:18:40 -- common/autotest_common.sh@960 -- # wait 85008 00:27:33.374 11:18:41 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:33.374 11:18:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:33.374 11:18:41 -- nvmf/common.sh@117 -- # sync 00:27:33.374 11:18:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:33.374 11:18:41 -- nvmf/common.sh@120 -- # set +e 00:27:33.374 11:18:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:33.374 11:18:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:33.374 rmmod nvme_tcp 00:27:33.374 rmmod nvme_fabrics 00:27:33.374 rmmod nvme_keyring 00:27:33.375 11:18:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:33.375 11:18:41 -- nvmf/common.sh@124 -- # set -e 00:27:33.375 11:18:41 -- nvmf/common.sh@125 -- # return 0 00:27:33.375 11:18:41 -- nvmf/common.sh@478 -- # '[' -n 84958 ']' 00:27:33.375 11:18:41 -- nvmf/common.sh@479 -- # killprocess 84958 00:27:33.375 11:18:41 -- common/autotest_common.sh@936 -- # '[' -z 84958 ']' 00:27:33.375 11:18:41 -- common/autotest_common.sh@940 -- # kill -0 84958 00:27:33.375 11:18:41 -- common/autotest_common.sh@941 -- # uname 00:27:33.375 11:18:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:33.375 11:18:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84958 00:27:33.375 killing process with pid 84958 00:27:33.375 11:18:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:33.375 11:18:41 -- common/autotest_common.sh@946 -- # '[' 
reactor_1 = sudo ']' 00:27:33.375 11:18:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84958' 00:27:33.375 11:18:41 -- common/autotest_common.sh@955 -- # kill 84958 00:27:33.375 11:18:41 -- common/autotest_common.sh@960 -- # wait 84958 00:27:34.780 11:18:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:34.780 11:18:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:34.780 11:18:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:34.780 11:18:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:34.780 11:18:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:34.780 11:18:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.780 11:18:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.780 11:18:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.780 11:18:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:34.780 00:27:34.780 real 0m16.436s 00:27:34.780 user 0m27.505s 00:27:34.780 sys 0m1.885s 00:27:34.780 11:18:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:34.780 ************************************ 00:27:34.780 11:18:42 -- common/autotest_common.sh@10 -- # set +x 00:27:34.780 END TEST nvmf_discovery_remove_ifc 00:27:34.780 ************************************ 00:27:34.780 11:18:42 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:34.780 11:18:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:34.780 11:18:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:34.780 11:18:42 -- common/autotest_common.sh@10 -- # set +x 00:27:34.780 ************************************ 00:27:34.780 START TEST nvmf_identify_kernel_target 00:27:34.780 ************************************ 00:27:34.780 11:18:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:34.780 * Looking for test storage... 
00:27:34.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:34.780 11:18:42 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:34.780 11:18:42 -- nvmf/common.sh@7 -- # uname -s 00:27:34.780 11:18:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.780 11:18:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.780 11:18:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.780 11:18:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.780 11:18:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.780 11:18:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.780 11:18:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.780 11:18:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.780 11:18:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.780 11:18:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.780 11:18:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:27:34.780 11:18:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:27:34.780 11:18:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.780 11:18:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.780 11:18:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:34.780 11:18:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.780 11:18:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:34.780 11:18:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.780 11:18:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.780 11:18:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.780 11:18:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.780 11:18:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.780 11:18:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.780 11:18:42 -- paths/export.sh@5 -- # export PATH 00:27:34.780 11:18:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.780 11:18:42 -- nvmf/common.sh@47 -- # : 0 00:27:34.780 11:18:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.780 11:18:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.780 11:18:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.780 11:18:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.780 11:18:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.780 11:18:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:34.780 11:18:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.780 11:18:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.780 11:18:42 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:34.780 11:18:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:34.780 11:18:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.780 11:18:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:34.780 11:18:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:34.780 11:18:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:34.780 11:18:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.780 11:18:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.780 11:18:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.780 11:18:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:34.780 11:18:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:34.780 11:18:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:34.780 11:18:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:34.780 11:18:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:34.780 11:18:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:34.780 11:18:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.780 11:18:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.780 11:18:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:34.780 11:18:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:34.780 11:18:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:34.781 11:18:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:34.781 11:18:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:34.781 11:18:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:34.781 11:18:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:34.781 11:18:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:34.781 11:18:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:34.781 11:18:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:34.781 11:18:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:34.781 11:18:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:34.781 Cannot find device "nvmf_tgt_br" 00:27:34.781 11:18:42 -- nvmf/common.sh@155 -- # true 00:27:34.781 11:18:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:34.781 Cannot find device "nvmf_tgt_br2" 00:27:34.781 11:18:42 -- nvmf/common.sh@156 -- # true 00:27:34.781 11:18:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:34.781 11:18:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:34.781 Cannot find device "nvmf_tgt_br" 00:27:34.781 11:18:42 -- nvmf/common.sh@158 -- # true 00:27:34.781 11:18:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:34.781 Cannot find device "nvmf_tgt_br2" 00:27:34.781 11:18:42 -- nvmf/common.sh@159 -- # true 00:27:34.781 11:18:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:34.781 11:18:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:34.781 11:18:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:34.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:34.781 11:18:42 -- nvmf/common.sh@162 -- # true 00:27:34.781 11:18:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:34.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:34.781 11:18:42 -- nvmf/common.sh@163 -- # true 00:27:34.781 11:18:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:34.781 11:18:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:34.781 11:18:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:34.781 11:18:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:34.781 11:18:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:34.781 11:18:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:35.039 11:18:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:35.039 11:18:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:35.039 11:18:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:35.039 11:18:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:35.039 11:18:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:35.039 11:18:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:35.039 11:18:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:35.039 11:18:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:35.039 11:18:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:35.039 11:18:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:35.039 11:18:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:35.039 11:18:43 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:35.039 11:18:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:35.039 11:18:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:35.039 11:18:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:35.039 11:18:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:35.039 11:18:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:35.039 11:18:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:35.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:27:35.039 00:27:35.039 --- 10.0.0.2 ping statistics --- 00:27:35.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.040 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:27:35.040 11:18:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:35.040 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:35.040 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:27:35.040 00:27:35.040 --- 10.0.0.3 ping statistics --- 00:27:35.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.040 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:27:35.040 11:18:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:35.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:35.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:27:35.040 00:27:35.040 --- 10.0.0.1 ping statistics --- 00:27:35.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.040 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:27:35.040 11:18:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.040 11:18:43 -- nvmf/common.sh@422 -- # return 0 00:27:35.040 11:18:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:35.040 11:18:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.040 11:18:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:35.040 11:18:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:35.040 11:18:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.040 11:18:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:35.040 11:18:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:35.040 11:18:43 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:35.040 11:18:43 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:35.040 11:18:43 -- nvmf/common.sh@717 -- # local ip 00:27:35.040 11:18:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:35.040 11:18:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:35.040 11:18:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.040 11:18:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.040 11:18:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:35.040 11:18:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.040 11:18:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:35.040 11:18:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:35.040 11:18:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:35.040 11:18:43 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:35.040 11:18:43 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:35.040 11:18:43 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:35.040 11:18:43 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:27:35.040 11:18:43 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:35.040 11:18:43 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:35.040 11:18:43 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:35.040 11:18:43 -- nvmf/common.sh@628 -- # local block nvme 00:27:35.040 11:18:43 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:27:35.040 11:18:43 -- nvmf/common.sh@631 -- # modprobe nvmet 00:27:35.040 11:18:43 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:35.040 11:18:43 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:35.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:35.556 Waiting for block devices as requested 00:27:35.556 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:35.556 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:35.556 11:18:43 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:35.556 11:18:43 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:35.556 11:18:43 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:27:35.556 11:18:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:35.556 11:18:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:35.556 11:18:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:35.556 11:18:43 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:27:35.556 11:18:43 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:35.556 11:18:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:35.814 No valid GPT data, bailing 00:27:35.814 11:18:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:35.814 11:18:43 -- scripts/common.sh@391 -- # pt= 00:27:35.814 11:18:43 -- scripts/common.sh@392 -- # return 1 00:27:35.814 11:18:43 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:27:35.814 11:18:43 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:35.814 11:18:43 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:35.814 11:18:43 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:27:35.814 11:18:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:27:35.814 11:18:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:35.814 11:18:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:35.814 11:18:43 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:27:35.814 11:18:43 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:27:35.814 11:18:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:35.814 No valid GPT data, bailing 00:27:35.814 11:18:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:27:35.814 11:18:43 -- scripts/common.sh@391 -- # pt= 00:27:35.814 11:18:43 -- scripts/common.sh@392 -- # return 1 00:27:35.814 11:18:43 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:27:35.814 11:18:43 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:35.814 11:18:43 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:35.814 11:18:43 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:27:35.814 11:18:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:27:35.814 11:18:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:35.814 11:18:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:35.814 11:18:43 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:27:35.814 11:18:43 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:27:35.814 11:18:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:35.814 No valid GPT data, bailing 00:27:35.814 11:18:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:35.814 11:18:43 -- scripts/common.sh@391 -- # pt= 00:27:35.814 11:18:43 -- scripts/common.sh@392 -- # return 1 00:27:35.814 11:18:43 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:27:35.814 11:18:43 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:35.814 11:18:43 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:35.814 11:18:43 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:27:35.814 11:18:43 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:27:35.814 11:18:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:35.814 11:18:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:35.814 11:18:43 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:27:35.814 11:18:43 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:27:35.814 11:18:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:35.814 No valid GPT data, bailing 00:27:36.072 11:18:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:36.072 11:18:44 -- scripts/common.sh@391 -- # pt= 00:27:36.072 11:18:44 -- scripts/common.sh@392 -- # return 1 00:27:36.072 11:18:44 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:27:36.072 11:18:44 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:27:36.072 11:18:44 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:36.072 11:18:44 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:36.072 11:18:44 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:36.072 11:18:44 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:36.072 11:18:44 -- nvmf/common.sh@656 -- # echo 1 00:27:36.072 11:18:44 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:27:36.072 11:18:44 -- nvmf/common.sh@658 -- # echo 1 00:27:36.072 11:18:44 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:27:36.072 11:18:44 -- nvmf/common.sh@661 -- # echo tcp 00:27:36.072 11:18:44 -- nvmf/common.sh@662 -- # echo 4420 00:27:36.072 11:18:44 -- nvmf/common.sh@663 -- # echo ipv4 00:27:36.072 11:18:44 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:36.072 11:18:44 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -a 10.0.0.1 -t tcp -s 4420 00:27:36.072 00:27:36.072 Discovery Log Number of Records 2, Generation counter 2 00:27:36.072 =====Discovery Log Entry 0====== 00:27:36.072 trtype: tcp 00:27:36.072 adrfam: ipv4 00:27:36.072 subtype: current discovery subsystem 00:27:36.072 treq: not specified, sq flow control disable supported 00:27:36.072 portid: 1 00:27:36.072 trsvcid: 4420 00:27:36.072 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:36.072 traddr: 10.0.0.1 00:27:36.072 eflags: none 00:27:36.072 sectype: none 00:27:36.072 =====Discovery Log Entry 1====== 00:27:36.072 trtype: tcp 00:27:36.072 adrfam: ipv4 00:27:36.072 subtype: nvme subsystem 00:27:36.072 treq: not specified, sq flow control disable supported 00:27:36.072 portid: 1 00:27:36.072 trsvcid: 4420 00:27:36.072 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:36.072 traddr: 10.0.0.1 00:27:36.072 eflags: none 00:27:36.072 sectype: none 00:27:36.072 11:18:44 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:36.072 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:36.331 ===================================================== 00:27:36.331 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:36.331 ===================================================== 00:27:36.331 Controller Capabilities/Features 00:27:36.331 ================================ 00:27:36.331 Vendor ID: 0000 00:27:36.331 Subsystem Vendor ID: 0000 00:27:36.331 Serial Number: c220c55ba3440178c9eb 00:27:36.331 Model Number: Linux 00:27:36.331 Firmware Version: 6.7.0-68 00:27:36.331 Recommended Arb Burst: 0 00:27:36.331 IEEE OUI Identifier: 00 00 00 00:27:36.331 Multi-path I/O 00:27:36.331 May have multiple subsystem ports: No 00:27:36.331 May have multiple controllers: No 00:27:36.331 Associated with SR-IOV VF: No 00:27:36.331 Max Data Transfer Size: Unlimited 00:27:36.331 Max Number of Namespaces: 0 00:27:36.331 Max Number of I/O Queues: 1024 00:27:36.331 NVMe Specification Version (VS): 1.3 00:27:36.331 NVMe Specification Version (Identify): 1.3 00:27:36.331 Maximum Queue Entries: 1024 00:27:36.331 Contiguous Queues Required: No 00:27:36.331 Arbitration Mechanisms Supported 00:27:36.331 Weighted Round Robin: Not Supported 00:27:36.331 Vendor Specific: Not Supported 00:27:36.331 Reset Timeout: 7500 ms 00:27:36.331 Doorbell Stride: 4 bytes 00:27:36.331 NVM Subsystem Reset: Not Supported 00:27:36.331 Command Sets Supported 00:27:36.331 NVM Command Set: Supported 00:27:36.331 Boot Partition: Not Supported 00:27:36.331 Memory Page Size Minimum: 4096 bytes 00:27:36.331 Memory Page Size Maximum: 4096 bytes 00:27:36.331 Persistent Memory Region: Not Supported 00:27:36.331 Optional Asynchronous Events Supported 00:27:36.331 Namespace Attribute Notices: Not Supported 00:27:36.331 Firmware Activation Notices: Not Supported 00:27:36.331 ANA Change Notices: Not Supported 00:27:36.331 PLE Aggregate Log Change Notices: Not Supported 00:27:36.331 LBA Status Info Alert Notices: Not Supported 00:27:36.331 EGE Aggregate Log Change Notices: Not Supported 00:27:36.331 Normal NVM Subsystem Shutdown event: Not Supported 00:27:36.331 Zone Descriptor Change Notices: Not Supported 00:27:36.331 Discovery Log Change Notices: Supported 00:27:36.331 Controller Attributes 00:27:36.331 128-bit Host Identifier: Not Supported 00:27:36.331 Non-Operational Permissive Mode: Not Supported 00:27:36.331 NVM Sets: Not Supported 00:27:36.331 Read Recovery Levels: Not Supported 00:27:36.331 Endurance Groups: Not Supported 00:27:36.331 Predictable Latency Mode: Not Supported 00:27:36.331 Traffic Based Keep ALive: Not Supported 00:27:36.331 Namespace Granularity: Not Supported 00:27:36.331 SQ Associations: Not Supported 00:27:36.331 UUID List: Not Supported 00:27:36.331 Multi-Domain Subsystem: Not Supported 00:27:36.331 Fixed Capacity Management: Not Supported 
00:27:36.331 Variable Capacity Management: Not Supported 00:27:36.331 Delete Endurance Group: Not Supported 00:27:36.331 Delete NVM Set: Not Supported 00:27:36.331 Extended LBA Formats Supported: Not Supported 00:27:36.331 Flexible Data Placement Supported: Not Supported 00:27:36.331 00:27:36.331 Controller Memory Buffer Support 00:27:36.331 ================================ 00:27:36.331 Supported: No 00:27:36.331 00:27:36.331 Persistent Memory Region Support 00:27:36.331 ================================ 00:27:36.331 Supported: No 00:27:36.331 00:27:36.331 Admin Command Set Attributes 00:27:36.331 ============================ 00:27:36.331 Security Send/Receive: Not Supported 00:27:36.331 Format NVM: Not Supported 00:27:36.331 Firmware Activate/Download: Not Supported 00:27:36.331 Namespace Management: Not Supported 00:27:36.331 Device Self-Test: Not Supported 00:27:36.331 Directives: Not Supported 00:27:36.331 NVMe-MI: Not Supported 00:27:36.331 Virtualization Management: Not Supported 00:27:36.331 Doorbell Buffer Config: Not Supported 00:27:36.331 Get LBA Status Capability: Not Supported 00:27:36.331 Command & Feature Lockdown Capability: Not Supported 00:27:36.331 Abort Command Limit: 1 00:27:36.331 Async Event Request Limit: 1 00:27:36.331 Number of Firmware Slots: N/A 00:27:36.331 Firmware Slot 1 Read-Only: N/A 00:27:36.331 Firmware Activation Without Reset: N/A 00:27:36.331 Multiple Update Detection Support: N/A 00:27:36.331 Firmware Update Granularity: No Information Provided 00:27:36.331 Per-Namespace SMART Log: No 00:27:36.331 Asymmetric Namespace Access Log Page: Not Supported 00:27:36.331 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:36.331 Command Effects Log Page: Not Supported 00:27:36.331 Get Log Page Extended Data: Supported 00:27:36.331 Telemetry Log Pages: Not Supported 00:27:36.331 Persistent Event Log Pages: Not Supported 00:27:36.331 Supported Log Pages Log Page: May Support 00:27:36.331 Commands Supported & Effects Log Page: Not Supported 00:27:36.331 Feature Identifiers & Effects Log Page:May Support 00:27:36.331 NVMe-MI Commands & Effects Log Page: May Support 00:27:36.331 Data Area 4 for Telemetry Log: Not Supported 00:27:36.331 Error Log Page Entries Supported: 1 00:27:36.331 Keep Alive: Not Supported 00:27:36.331 00:27:36.331 NVM Command Set Attributes 00:27:36.331 ========================== 00:27:36.331 Submission Queue Entry Size 00:27:36.331 Max: 1 00:27:36.331 Min: 1 00:27:36.331 Completion Queue Entry Size 00:27:36.331 Max: 1 00:27:36.331 Min: 1 00:27:36.331 Number of Namespaces: 0 00:27:36.331 Compare Command: Not Supported 00:27:36.331 Write Uncorrectable Command: Not Supported 00:27:36.331 Dataset Management Command: Not Supported 00:27:36.331 Write Zeroes Command: Not Supported 00:27:36.331 Set Features Save Field: Not Supported 00:27:36.331 Reservations: Not Supported 00:27:36.331 Timestamp: Not Supported 00:27:36.331 Copy: Not Supported 00:27:36.331 Volatile Write Cache: Not Present 00:27:36.332 Atomic Write Unit (Normal): 1 00:27:36.332 Atomic Write Unit (PFail): 1 00:27:36.332 Atomic Compare & Write Unit: 1 00:27:36.332 Fused Compare & Write: Not Supported 00:27:36.332 Scatter-Gather List 00:27:36.332 SGL Command Set: Supported 00:27:36.332 SGL Keyed: Not Supported 00:27:36.332 SGL Bit Bucket Descriptor: Not Supported 00:27:36.332 SGL Metadata Pointer: Not Supported 00:27:36.332 Oversized SGL: Not Supported 00:27:36.332 SGL Metadata Address: Not Supported 00:27:36.332 SGL Offset: Supported 00:27:36.332 Transport SGL Data Block: Not 
Supported 00:27:36.332 Replay Protected Memory Block: Not Supported 00:27:36.332 00:27:36.332 Firmware Slot Information 00:27:36.332 ========================= 00:27:36.332 Active slot: 0 00:27:36.332 00:27:36.332 00:27:36.332 Error Log 00:27:36.332 ========= 00:27:36.332 00:27:36.332 Active Namespaces 00:27:36.332 ================= 00:27:36.332 Discovery Log Page 00:27:36.332 ================== 00:27:36.332 Generation Counter: 2 00:27:36.332 Number of Records: 2 00:27:36.332 Record Format: 0 00:27:36.332 00:27:36.332 Discovery Log Entry 0 00:27:36.332 ---------------------- 00:27:36.332 Transport Type: 3 (TCP) 00:27:36.332 Address Family: 1 (IPv4) 00:27:36.332 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:36.332 Entry Flags: 00:27:36.332 Duplicate Returned Information: 0 00:27:36.332 Explicit Persistent Connection Support for Discovery: 0 00:27:36.332 Transport Requirements: 00:27:36.332 Secure Channel: Not Specified 00:27:36.332 Port ID: 1 (0x0001) 00:27:36.332 Controller ID: 65535 (0xffff) 00:27:36.332 Admin Max SQ Size: 32 00:27:36.332 Transport Service Identifier: 4420 00:27:36.332 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:36.332 Transport Address: 10.0.0.1 00:27:36.332 Discovery Log Entry 1 00:27:36.332 ---------------------- 00:27:36.332 Transport Type: 3 (TCP) 00:27:36.332 Address Family: 1 (IPv4) 00:27:36.332 Subsystem Type: 2 (NVM Subsystem) 00:27:36.332 Entry Flags: 00:27:36.332 Duplicate Returned Information: 0 00:27:36.332 Explicit Persistent Connection Support for Discovery: 0 00:27:36.332 Transport Requirements: 00:27:36.332 Secure Channel: Not Specified 00:27:36.332 Port ID: 1 (0x0001) 00:27:36.332 Controller ID: 65535 (0xffff) 00:27:36.332 Admin Max SQ Size: 32 00:27:36.332 Transport Service Identifier: 4420 00:27:36.332 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:36.332 Transport Address: 10.0.0.1 00:27:36.332 11:18:44 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:36.590 get_feature(0x01) failed 00:27:36.590 get_feature(0x02) failed 00:27:36.590 get_feature(0x04) failed 00:27:36.590 ===================================================== 00:27:36.590 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:36.590 ===================================================== 00:27:36.590 Controller Capabilities/Features 00:27:36.590 ================================ 00:27:36.590 Vendor ID: 0000 00:27:36.590 Subsystem Vendor ID: 0000 00:27:36.590 Serial Number: f831efa46adbc13adfdf 00:27:36.590 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:36.590 Firmware Version: 6.7.0-68 00:27:36.590 Recommended Arb Burst: 6 00:27:36.590 IEEE OUI Identifier: 00 00 00 00:27:36.590 Multi-path I/O 00:27:36.590 May have multiple subsystem ports: Yes 00:27:36.590 May have multiple controllers: Yes 00:27:36.590 Associated with SR-IOV VF: No 00:27:36.590 Max Data Transfer Size: Unlimited 00:27:36.590 Max Number of Namespaces: 1024 00:27:36.590 Max Number of I/O Queues: 128 00:27:36.590 NVMe Specification Version (VS): 1.3 00:27:36.590 NVMe Specification Version (Identify): 1.3 00:27:36.590 Maximum Queue Entries: 1024 00:27:36.590 Contiguous Queues Required: No 00:27:36.590 Arbitration Mechanisms Supported 00:27:36.590 Weighted Round Robin: Not Supported 00:27:36.590 Vendor Specific: Not Supported 00:27:36.590 Reset Timeout: 7500 ms 00:27:36.590 Doorbell Stride: 4 bytes 
00:27:36.590 NVM Subsystem Reset: Not Supported 00:27:36.590 Command Sets Supported 00:27:36.590 NVM Command Set: Supported 00:27:36.590 Boot Partition: Not Supported 00:27:36.590 Memory Page Size Minimum: 4096 bytes 00:27:36.590 Memory Page Size Maximum: 4096 bytes 00:27:36.590 Persistent Memory Region: Not Supported 00:27:36.590 Optional Asynchronous Events Supported 00:27:36.590 Namespace Attribute Notices: Supported 00:27:36.590 Firmware Activation Notices: Not Supported 00:27:36.590 ANA Change Notices: Supported 00:27:36.590 PLE Aggregate Log Change Notices: Not Supported 00:27:36.590 LBA Status Info Alert Notices: Not Supported 00:27:36.590 EGE Aggregate Log Change Notices: Not Supported 00:27:36.590 Normal NVM Subsystem Shutdown event: Not Supported 00:27:36.590 Zone Descriptor Change Notices: Not Supported 00:27:36.590 Discovery Log Change Notices: Not Supported 00:27:36.590 Controller Attributes 00:27:36.590 128-bit Host Identifier: Supported 00:27:36.590 Non-Operational Permissive Mode: Not Supported 00:27:36.591 NVM Sets: Not Supported 00:27:36.591 Read Recovery Levels: Not Supported 00:27:36.591 Endurance Groups: Not Supported 00:27:36.591 Predictable Latency Mode: Not Supported 00:27:36.591 Traffic Based Keep ALive: Supported 00:27:36.591 Namespace Granularity: Not Supported 00:27:36.591 SQ Associations: Not Supported 00:27:36.591 UUID List: Not Supported 00:27:36.591 Multi-Domain Subsystem: Not Supported 00:27:36.591 Fixed Capacity Management: Not Supported 00:27:36.591 Variable Capacity Management: Not Supported 00:27:36.591 Delete Endurance Group: Not Supported 00:27:36.591 Delete NVM Set: Not Supported 00:27:36.591 Extended LBA Formats Supported: Not Supported 00:27:36.591 Flexible Data Placement Supported: Not Supported 00:27:36.591 00:27:36.591 Controller Memory Buffer Support 00:27:36.591 ================================ 00:27:36.591 Supported: No 00:27:36.591 00:27:36.591 Persistent Memory Region Support 00:27:36.591 ================================ 00:27:36.591 Supported: No 00:27:36.591 00:27:36.591 Admin Command Set Attributes 00:27:36.591 ============================ 00:27:36.591 Security Send/Receive: Not Supported 00:27:36.591 Format NVM: Not Supported 00:27:36.591 Firmware Activate/Download: Not Supported 00:27:36.591 Namespace Management: Not Supported 00:27:36.591 Device Self-Test: Not Supported 00:27:36.591 Directives: Not Supported 00:27:36.591 NVMe-MI: Not Supported 00:27:36.591 Virtualization Management: Not Supported 00:27:36.591 Doorbell Buffer Config: Not Supported 00:27:36.591 Get LBA Status Capability: Not Supported 00:27:36.591 Command & Feature Lockdown Capability: Not Supported 00:27:36.591 Abort Command Limit: 4 00:27:36.591 Async Event Request Limit: 4 00:27:36.591 Number of Firmware Slots: N/A 00:27:36.591 Firmware Slot 1 Read-Only: N/A 00:27:36.591 Firmware Activation Without Reset: N/A 00:27:36.591 Multiple Update Detection Support: N/A 00:27:36.591 Firmware Update Granularity: No Information Provided 00:27:36.591 Per-Namespace SMART Log: Yes 00:27:36.591 Asymmetric Namespace Access Log Page: Supported 00:27:36.591 ANA Transition Time : 10 sec 00:27:36.591 00:27:36.591 Asymmetric Namespace Access Capabilities 00:27:36.591 ANA Optimized State : Supported 00:27:36.591 ANA Non-Optimized State : Supported 00:27:36.591 ANA Inaccessible State : Supported 00:27:36.591 ANA Persistent Loss State : Supported 00:27:36.591 ANA Change State : Supported 00:27:36.591 ANAGRPID is not changed : No 00:27:36.591 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:27:36.591 00:27:36.591 ANA Group Identifier Maximum : 128 00:27:36.591 Number of ANA Group Identifiers : 128 00:27:36.591 Max Number of Allowed Namespaces : 1024 00:27:36.591 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:36.591 Command Effects Log Page: Supported 00:27:36.591 Get Log Page Extended Data: Supported 00:27:36.591 Telemetry Log Pages: Not Supported 00:27:36.591 Persistent Event Log Pages: Not Supported 00:27:36.591 Supported Log Pages Log Page: May Support 00:27:36.591 Commands Supported & Effects Log Page: Not Supported 00:27:36.591 Feature Identifiers & Effects Log Page:May Support 00:27:36.591 NVMe-MI Commands & Effects Log Page: May Support 00:27:36.591 Data Area 4 for Telemetry Log: Not Supported 00:27:36.591 Error Log Page Entries Supported: 128 00:27:36.591 Keep Alive: Supported 00:27:36.591 Keep Alive Granularity: 1000 ms 00:27:36.591 00:27:36.591 NVM Command Set Attributes 00:27:36.591 ========================== 00:27:36.591 Submission Queue Entry Size 00:27:36.591 Max: 64 00:27:36.591 Min: 64 00:27:36.591 Completion Queue Entry Size 00:27:36.591 Max: 16 00:27:36.591 Min: 16 00:27:36.591 Number of Namespaces: 1024 00:27:36.591 Compare Command: Not Supported 00:27:36.591 Write Uncorrectable Command: Not Supported 00:27:36.591 Dataset Management Command: Supported 00:27:36.591 Write Zeroes Command: Supported 00:27:36.591 Set Features Save Field: Not Supported 00:27:36.591 Reservations: Not Supported 00:27:36.591 Timestamp: Not Supported 00:27:36.591 Copy: Not Supported 00:27:36.591 Volatile Write Cache: Present 00:27:36.591 Atomic Write Unit (Normal): 1 00:27:36.591 Atomic Write Unit (PFail): 1 00:27:36.591 Atomic Compare & Write Unit: 1 00:27:36.591 Fused Compare & Write: Not Supported 00:27:36.591 Scatter-Gather List 00:27:36.591 SGL Command Set: Supported 00:27:36.591 SGL Keyed: Not Supported 00:27:36.591 SGL Bit Bucket Descriptor: Not Supported 00:27:36.591 SGL Metadata Pointer: Not Supported 00:27:36.591 Oversized SGL: Not Supported 00:27:36.591 SGL Metadata Address: Not Supported 00:27:36.591 SGL Offset: Supported 00:27:36.591 Transport SGL Data Block: Not Supported 00:27:36.591 Replay Protected Memory Block: Not Supported 00:27:36.591 00:27:36.591 Firmware Slot Information 00:27:36.591 ========================= 00:27:36.591 Active slot: 0 00:27:36.591 00:27:36.591 Asymmetric Namespace Access 00:27:36.591 =========================== 00:27:36.591 Change Count : 0 00:27:36.591 Number of ANA Group Descriptors : 1 00:27:36.591 ANA Group Descriptor : 0 00:27:36.591 ANA Group ID : 1 00:27:36.591 Number of NSID Values : 1 00:27:36.591 Change Count : 0 00:27:36.591 ANA State : 1 00:27:36.591 Namespace Identifier : 1 00:27:36.591 00:27:36.591 Commands Supported and Effects 00:27:36.591 ============================== 00:27:36.591 Admin Commands 00:27:36.591 -------------- 00:27:36.591 Get Log Page (02h): Supported 00:27:36.591 Identify (06h): Supported 00:27:36.591 Abort (08h): Supported 00:27:36.591 Set Features (09h): Supported 00:27:36.591 Get Features (0Ah): Supported 00:27:36.591 Asynchronous Event Request (0Ch): Supported 00:27:36.591 Keep Alive (18h): Supported 00:27:36.591 I/O Commands 00:27:36.591 ------------ 00:27:36.591 Flush (00h): Supported 00:27:36.591 Write (01h): Supported LBA-Change 00:27:36.591 Read (02h): Supported 00:27:36.591 Write Zeroes (08h): Supported LBA-Change 00:27:36.591 Dataset Management (09h): Supported 00:27:36.591 00:27:36.591 Error Log 00:27:36.591 ========= 00:27:36.591 Entry: 0 00:27:36.591 Error Count: 0x3 00:27:36.591 Submission 
Queue Id: 0x0 00:27:36.591 Command Id: 0x5 00:27:36.591 Phase Bit: 0 00:27:36.591 Status Code: 0x2 00:27:36.591 Status Code Type: 0x0 00:27:36.591 Do Not Retry: 1 00:27:36.591 Error Location: 0x28 00:27:36.591 LBA: 0x0 00:27:36.591 Namespace: 0x0 00:27:36.591 Vendor Log Page: 0x0 00:27:36.591 ----------- 00:27:36.591 Entry: 1 00:27:36.591 Error Count: 0x2 00:27:36.591 Submission Queue Id: 0x0 00:27:36.591 Command Id: 0x5 00:27:36.591 Phase Bit: 0 00:27:36.591 Status Code: 0x2 00:27:36.591 Status Code Type: 0x0 00:27:36.591 Do Not Retry: 1 00:27:36.591 Error Location: 0x28 00:27:36.591 LBA: 0x0 00:27:36.591 Namespace: 0x0 00:27:36.591 Vendor Log Page: 0x0 00:27:36.591 ----------- 00:27:36.591 Entry: 2 00:27:36.591 Error Count: 0x1 00:27:36.591 Submission Queue Id: 0x0 00:27:36.591 Command Id: 0x4 00:27:36.591 Phase Bit: 0 00:27:36.591 Status Code: 0x2 00:27:36.591 Status Code Type: 0x0 00:27:36.591 Do Not Retry: 1 00:27:36.591 Error Location: 0x28 00:27:36.591 LBA: 0x0 00:27:36.591 Namespace: 0x0 00:27:36.591 Vendor Log Page: 0x0 00:27:36.591 00:27:36.591 Number of Queues 00:27:36.591 ================ 00:27:36.591 Number of I/O Submission Queues: 128 00:27:36.591 Number of I/O Completion Queues: 128 00:27:36.591 00:27:36.591 ZNS Specific Controller Data 00:27:36.591 ============================ 00:27:36.591 Zone Append Size Limit: 0 00:27:36.591 00:27:36.591 00:27:36.591 Active Namespaces 00:27:36.591 ================= 00:27:36.591 get_feature(0x05) failed 00:27:36.591 Namespace ID:1 00:27:36.591 Command Set Identifier: NVM (00h) 00:27:36.591 Deallocate: Supported 00:27:36.591 Deallocated/Unwritten Error: Not Supported 00:27:36.591 Deallocated Read Value: Unknown 00:27:36.591 Deallocate in Write Zeroes: Not Supported 00:27:36.591 Deallocated Guard Field: 0xFFFF 00:27:36.591 Flush: Supported 00:27:36.591 Reservation: Not Supported 00:27:36.591 Namespace Sharing Capabilities: Multiple Controllers 00:27:36.591 Size (in LBAs): 1310720 (5GiB) 00:27:36.591 Capacity (in LBAs): 1310720 (5GiB) 00:27:36.591 Utilization (in LBAs): 1310720 (5GiB) 00:27:36.591 UUID: 53dd84d3-fd29-4bb3-8072-bc6eacdb7dfb 00:27:36.591 Thin Provisioning: Not Supported 00:27:36.591 Per-NS Atomic Units: Yes 00:27:36.591 Atomic Boundary Size (Normal): 0 00:27:36.591 Atomic Boundary Size (PFail): 0 00:27:36.591 Atomic Boundary Offset: 0 00:27:36.591 NGUID/EUI64 Never Reused: No 00:27:36.592 ANA group ID: 1 00:27:36.592 Namespace Write Protected: No 00:27:36.592 Number of LBA Formats: 1 00:27:36.592 Current LBA Format: LBA Format #00 00:27:36.592 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:27:36.592 00:27:36.592 11:18:44 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:36.592 11:18:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:36.592 11:18:44 -- nvmf/common.sh@117 -- # sync 00:27:36.592 11:18:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.592 11:18:44 -- nvmf/common.sh@120 -- # set +e 00:27:36.592 11:18:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.592 11:18:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.592 rmmod nvme_tcp 00:27:36.592 rmmod nvme_fabrics 00:27:36.592 11:18:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.592 11:18:44 -- nvmf/common.sh@124 -- # set -e 00:27:36.592 11:18:44 -- nvmf/common.sh@125 -- # return 0 00:27:36.592 11:18:44 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:27:36.592 11:18:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:36.592 11:18:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:36.592 11:18:44 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:36.592 11:18:44 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.592 11:18:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.592 11:18:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.592 11:18:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.592 11:18:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.592 11:18:44 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:36.592 11:18:44 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:36.592 11:18:44 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:36.592 11:18:44 -- nvmf/common.sh@675 -- # echo 0 00:27:36.592 11:18:44 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:36.592 11:18:44 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:36.592 11:18:44 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:36.849 11:18:44 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:36.849 11:18:44 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:27:36.849 11:18:44 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:27:36.849 11:18:44 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:37.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:37.457 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:37.457 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:37.457 00:27:37.457 real 0m2.944s 00:27:37.457 user 0m0.978s 00:27:37.457 sys 0m1.434s 00:27:37.457 11:18:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:37.457 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:27:37.457 ************************************ 00:27:37.457 END TEST nvmf_identify_kernel_target 00:27:37.457 ************************************ 00:27:37.715 11:18:45 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:37.715 11:18:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:37.715 11:18:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:37.715 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:27:37.715 ************************************ 00:27:37.715 START TEST nvmf_auth 00:27:37.715 ************************************ 00:27:37.715 11:18:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:37.715 * Looking for test storage... 
00:27:37.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:37.715 11:18:45 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:37.715 11:18:45 -- nvmf/common.sh@7 -- # uname -s 00:27:37.715 11:18:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.715 11:18:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.715 11:18:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.715 11:18:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.715 11:18:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.715 11:18:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.715 11:18:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.715 11:18:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.715 11:18:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.715 11:18:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.715 11:18:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:27:37.715 11:18:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:27:37.715 11:18:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.715 11:18:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.715 11:18:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:37.715 11:18:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.715 11:18:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:37.715 11:18:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.715 11:18:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.715 11:18:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.715 11:18:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.715 11:18:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.715 11:18:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.715 11:18:45 -- paths/export.sh@5 -- # export PATH 00:27:37.715 11:18:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.715 11:18:45 -- nvmf/common.sh@47 -- # : 0 00:27:37.715 11:18:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:37.715 11:18:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:37.715 11:18:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.715 11:18:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.715 11:18:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.716 11:18:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:37.716 11:18:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:37.716 11:18:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:37.716 11:18:45 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:37.716 11:18:45 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:37.716 11:18:45 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:37.716 11:18:45 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:37.716 11:18:45 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:37.716 11:18:45 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:37.716 11:18:45 -- host/auth.sh@21 -- # keys=() 00:27:37.716 11:18:45 -- host/auth.sh@77 -- # nvmftestinit 00:27:37.716 11:18:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:37.716 11:18:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.716 11:18:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:37.716 11:18:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:37.716 11:18:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:37.716 11:18:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.716 11:18:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.716 11:18:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.716 11:18:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:37.716 11:18:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:37.716 11:18:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:37.716 11:18:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:37.716 11:18:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:37.716 11:18:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:37.716 11:18:45 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.716 11:18:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.716 11:18:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:37.716 11:18:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:37.716 11:18:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:37.716 11:18:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:37.716 11:18:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:37.716 11:18:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.716 11:18:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:37.716 11:18:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:37.716 11:18:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:37.716 11:18:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:37.716 11:18:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:37.716 11:18:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:37.716 Cannot find device "nvmf_tgt_br" 00:27:37.716 11:18:45 -- nvmf/common.sh@155 -- # true 00:27:37.716 11:18:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:37.716 Cannot find device "nvmf_tgt_br2" 00:27:37.716 11:18:45 -- nvmf/common.sh@156 -- # true 00:27:37.716 11:18:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:37.716 11:18:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:37.973 Cannot find device "nvmf_tgt_br" 00:27:37.973 11:18:45 -- nvmf/common.sh@158 -- # true 00:27:37.973 11:18:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:37.973 Cannot find device "nvmf_tgt_br2" 00:27:37.973 11:18:45 -- nvmf/common.sh@159 -- # true 00:27:37.973 11:18:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:37.973 11:18:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:37.973 11:18:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:37.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:37.973 11:18:45 -- nvmf/common.sh@162 -- # true 00:27:37.973 11:18:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:37.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:37.973 11:18:45 -- nvmf/common.sh@163 -- # true 00:27:37.973 11:18:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:37.973 11:18:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:37.973 11:18:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:37.973 11:18:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:37.974 11:18:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:37.974 11:18:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:37.974 11:18:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:37.974 11:18:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:37.974 11:18:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:37.974 11:18:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:37.974 11:18:46 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:37.974 11:18:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:37.974 11:18:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:37.974 11:18:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:37.974 11:18:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:37.974 11:18:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:37.974 11:18:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:37.974 11:18:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:37.974 11:18:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:38.232 11:18:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:38.232 11:18:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:38.232 11:18:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:38.232 11:18:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:38.232 11:18:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:38.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:27:38.232 00:27:38.232 --- 10.0.0.2 ping statistics --- 00:27:38.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.232 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:27:38.232 11:18:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:38.232 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:38.232 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:27:38.232 00:27:38.232 --- 10.0.0.3 ping statistics --- 00:27:38.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.232 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:27:38.232 11:18:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:38.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:27:38.232 00:27:38.232 --- 10.0.0.1 ping statistics --- 00:27:38.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.232 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:27:38.232 11:18:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.232 11:18:46 -- nvmf/common.sh@422 -- # return 0 00:27:38.232 11:18:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:38.232 11:18:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.232 11:18:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:38.232 11:18:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:38.232 11:18:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.232 11:18:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:38.232 11:18:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:38.232 11:18:46 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:27:38.232 11:18:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:38.232 11:18:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:38.232 11:18:46 -- common/autotest_common.sh@10 -- # set +x 00:27:38.232 11:18:46 -- nvmf/common.sh@470 -- # nvmfpid=85934 00:27:38.232 11:18:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:38.232 11:18:46 -- nvmf/common.sh@471 -- # waitforlisten 85934 00:27:38.232 11:18:46 -- common/autotest_common.sh@817 -- # '[' -z 85934 ']' 00:27:38.232 11:18:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.232 11:18:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:38.232 11:18:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:38.232 11:18:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:38.232 11:18:46 -- common/autotest_common.sh@10 -- # set +x 00:27:39.166 11:18:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:39.166 11:18:47 -- common/autotest_common.sh@850 -- # return 0 00:27:39.166 11:18:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:39.166 11:18:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:39.166 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:27:39.424 11:18:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.424 11:18:47 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:39.424 11:18:47 -- host/auth.sh@81 -- # gen_key null 32 00:27:39.424 11:18:47 -- host/auth.sh@53 -- # local digest len file key 00:27:39.424 11:18:47 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:39.424 11:18:47 -- host/auth.sh@54 -- # local -A digests 00:27:39.424 11:18:47 -- host/auth.sh@56 -- # digest=null 00:27:39.424 11:18:47 -- host/auth.sh@56 -- # len=32 00:27:39.424 11:18:47 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:39.424 11:18:47 -- host/auth.sh@57 -- # key=04705db9768577357a9e439b041b16c4 00:27:39.424 11:18:47 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:27:39.424 11:18:47 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.NRP 00:27:39.425 11:18:47 -- host/auth.sh@59 -- # format_dhchap_key 04705db9768577357a9e439b041b16c4 0 00:27:39.425 11:18:47 -- nvmf/common.sh@708 -- # format_key DHHC-1 04705db9768577357a9e439b041b16c4 0 00:27:39.425 11:18:47 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # key=04705db9768577357a9e439b041b16c4 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # digest=0 00:27:39.425 11:18:47 -- nvmf/common.sh@694 -- # python - 00:27:39.425 11:18:47 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.NRP 00:27:39.425 11:18:47 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.NRP 00:27:39.425 11:18:47 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.NRP 00:27:39.425 11:18:47 -- host/auth.sh@82 -- # gen_key null 48 00:27:39.425 11:18:47 -- host/auth.sh@53 -- # local digest len file key 00:27:39.425 11:18:47 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:39.425 11:18:47 -- host/auth.sh@54 -- # local -A digests 00:27:39.425 11:18:47 -- host/auth.sh@56 -- # digest=null 00:27:39.425 11:18:47 -- host/auth.sh@56 -- # len=48 00:27:39.425 11:18:47 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:39.425 11:18:47 -- host/auth.sh@57 -- # key=dcc9b2ac2391a8770f5be566eb72f7eead2d51276ff6be49 00:27:39.425 11:18:47 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:27:39.425 11:18:47 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.wiP 00:27:39.425 11:18:47 -- host/auth.sh@59 -- # format_dhchap_key dcc9b2ac2391a8770f5be566eb72f7eead2d51276ff6be49 0 00:27:39.425 11:18:47 -- nvmf/common.sh@708 -- # format_key DHHC-1 dcc9b2ac2391a8770f5be566eb72f7eead2d51276ff6be49 0 00:27:39.425 11:18:47 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # key=dcc9b2ac2391a8770f5be566eb72f7eead2d51276ff6be49 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # digest=0 00:27:39.425 
11:18:47 -- nvmf/common.sh@694 -- # python - 00:27:39.425 11:18:47 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.wiP 00:27:39.425 11:18:47 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.wiP 00:27:39.425 11:18:47 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.wiP 00:27:39.425 11:18:47 -- host/auth.sh@83 -- # gen_key sha256 32 00:27:39.425 11:18:47 -- host/auth.sh@53 -- # local digest len file key 00:27:39.425 11:18:47 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:39.425 11:18:47 -- host/auth.sh@54 -- # local -A digests 00:27:39.425 11:18:47 -- host/auth.sh@56 -- # digest=sha256 00:27:39.425 11:18:47 -- host/auth.sh@56 -- # len=32 00:27:39.425 11:18:47 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:39.425 11:18:47 -- host/auth.sh@57 -- # key=6a18d4a40b2defeb4feb76f482fba7f1 00:27:39.425 11:18:47 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:27:39.425 11:18:47 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.plr 00:27:39.425 11:18:47 -- host/auth.sh@59 -- # format_dhchap_key 6a18d4a40b2defeb4feb76f482fba7f1 1 00:27:39.425 11:18:47 -- nvmf/common.sh@708 -- # format_key DHHC-1 6a18d4a40b2defeb4feb76f482fba7f1 1 00:27:39.425 11:18:47 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # key=6a18d4a40b2defeb4feb76f482fba7f1 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # digest=1 00:27:39.425 11:18:47 -- nvmf/common.sh@694 -- # python - 00:27:39.425 11:18:47 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.plr 00:27:39.425 11:18:47 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.plr 00:27:39.425 11:18:47 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.plr 00:27:39.425 11:18:47 -- host/auth.sh@84 -- # gen_key sha384 48 00:27:39.425 11:18:47 -- host/auth.sh@53 -- # local digest len file key 00:27:39.425 11:18:47 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:39.425 11:18:47 -- host/auth.sh@54 -- # local -A digests 00:27:39.425 11:18:47 -- host/auth.sh@56 -- # digest=sha384 00:27:39.425 11:18:47 -- host/auth.sh@56 -- # len=48 00:27:39.425 11:18:47 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:39.425 11:18:47 -- host/auth.sh@57 -- # key=b450fb1c128cc89280a3c0a7ca3d89ee27e6217ab37ff13e 00:27:39.425 11:18:47 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:27:39.425 11:18:47 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.mUb 00:27:39.425 11:18:47 -- host/auth.sh@59 -- # format_dhchap_key b450fb1c128cc89280a3c0a7ca3d89ee27e6217ab37ff13e 2 00:27:39.425 11:18:47 -- nvmf/common.sh@708 -- # format_key DHHC-1 b450fb1c128cc89280a3c0a7ca3d89ee27e6217ab37ff13e 2 00:27:39.425 11:18:47 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # key=b450fb1c128cc89280a3c0a7ca3d89ee27e6217ab37ff13e 00:27:39.425 11:18:47 -- nvmf/common.sh@693 -- # digest=2 00:27:39.425 11:18:47 -- nvmf/common.sh@694 -- # python - 00:27:39.683 11:18:47 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.mUb 00:27:39.683 11:18:47 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.mUb 00:27:39.683 11:18:47 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.mUb 00:27:39.683 11:18:47 -- host/auth.sh@85 -- # gen_key sha512 64 00:27:39.683 11:18:47 -- host/auth.sh@53 -- # local digest len file key 00:27:39.683 11:18:47 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:39.683 11:18:47 -- host/auth.sh@54 -- # local -A digests 00:27:39.683 11:18:47 -- host/auth.sh@56 -- # digest=sha512 00:27:39.683 11:18:47 -- host/auth.sh@56 -- # len=64 00:27:39.683 11:18:47 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:39.683 11:18:47 -- host/auth.sh@57 -- # key=4085c9587d2a2ecbb8af6f0b9b59a193d3afd934c75e825178c409970f43105b 00:27:39.683 11:18:47 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:27:39.683 11:18:47 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.wsp 00:27:39.683 11:18:47 -- host/auth.sh@59 -- # format_dhchap_key 4085c9587d2a2ecbb8af6f0b9b59a193d3afd934c75e825178c409970f43105b 3 00:27:39.683 11:18:47 -- nvmf/common.sh@708 -- # format_key DHHC-1 4085c9587d2a2ecbb8af6f0b9b59a193d3afd934c75e825178c409970f43105b 3 00:27:39.683 11:18:47 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:39.683 11:18:47 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:39.683 11:18:47 -- nvmf/common.sh@693 -- # key=4085c9587d2a2ecbb8af6f0b9b59a193d3afd934c75e825178c409970f43105b 00:27:39.683 11:18:47 -- nvmf/common.sh@693 -- # digest=3 00:27:39.683 11:18:47 -- nvmf/common.sh@694 -- # python - 00:27:39.683 11:18:47 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.wsp 00:27:39.683 11:18:47 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.wsp 00:27:39.683 11:18:47 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.wsp 00:27:39.683 11:18:47 -- host/auth.sh@87 -- # waitforlisten 85934 00:27:39.683 11:18:47 -- common/autotest_common.sh@817 -- # '[' -z 85934 ']' 00:27:39.683 11:18:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.683 11:18:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:39.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.683 11:18:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:39.683 11:18:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:39.683 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:27:39.942 11:18:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:39.942 11:18:48 -- common/autotest_common.sh@850 -- # return 0 00:27:39.942 11:18:48 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:39.942 11:18:48 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NRP 00:27:39.942 11:18:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.942 11:18:48 -- common/autotest_common.sh@10 -- # set +x 00:27:39.942 11:18:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.942 11:18:48 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:39.942 11:18:48 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wiP 00:27:39.942 11:18:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.942 11:18:48 -- common/autotest_common.sh@10 -- # set +x 00:27:39.942 11:18:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.942 11:18:48 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:39.942 11:18:48 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.plr 00:27:39.942 11:18:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.942 11:18:48 -- common/autotest_common.sh@10 -- # set +x 00:27:39.942 11:18:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.942 11:18:48 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:39.942 11:18:48 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.mUb 00:27:39.942 11:18:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.942 11:18:48 -- common/autotest_common.sh@10 -- # set +x 00:27:39.942 11:18:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.942 11:18:48 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:39.942 11:18:48 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.wsp 00:27:39.942 11:18:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.942 11:18:48 -- common/autotest_common.sh@10 -- # set +x 00:27:39.942 11:18:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.942 11:18:48 -- host/auth.sh@92 -- # nvmet_auth_init 00:27:39.942 11:18:48 -- host/auth.sh@35 -- # get_main_ns_ip 00:27:39.942 11:18:48 -- nvmf/common.sh@717 -- # local ip 00:27:39.942 11:18:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:39.942 11:18:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:39.942 11:18:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.942 11:18:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.942 11:18:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:39.942 11:18:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.942 11:18:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:39.942 11:18:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:39.942 11:18:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:39.942 11:18:48 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:39.942 11:18:48 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:39.942 11:18:48 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:27:39.942 11:18:48 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:39.942 11:18:48 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:39.942 11:18:48 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:39.942 11:18:48 -- nvmf/common.sh@628 -- # local block nvme 00:27:39.942 11:18:48 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:27:39.942 11:18:48 -- nvmf/common.sh@631 -- # modprobe nvmet 00:27:39.942 11:18:48 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:39.942 11:18:48 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:40.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:40.458 Waiting for block devices as requested 00:27:40.458 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:40.458 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:41.022 11:18:49 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:41.022 11:18:49 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:41.022 11:18:49 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:27:41.022 11:18:49 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:41.022 11:18:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:41.022 11:18:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:41.022 11:18:49 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:27:41.022 11:18:49 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:41.022 11:18:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:41.022 No valid GPT data, bailing 00:27:41.022 11:18:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:41.022 11:18:49 -- scripts/common.sh@391 -- # pt= 00:27:41.022 11:18:49 -- scripts/common.sh@392 -- # return 1 00:27:41.022 11:18:49 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:27:41.022 11:18:49 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:41.022 11:18:49 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:41.022 11:18:49 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:27:41.022 11:18:49 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:27:41.023 11:18:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:41.023 11:18:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:41.023 11:18:49 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:27:41.023 11:18:49 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:27:41.023 11:18:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:41.280 No valid GPT data, bailing 00:27:41.280 11:18:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:27:41.280 11:18:49 -- scripts/common.sh@391 -- # pt= 00:27:41.280 11:18:49 -- scripts/common.sh@392 -- # return 1 00:27:41.280 11:18:49 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:27:41.280 11:18:49 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:41.280 11:18:49 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:41.280 11:18:49 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:27:41.280 11:18:49 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:27:41.280 11:18:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:41.280 11:18:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:41.280 11:18:49 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:27:41.280 11:18:49 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:27:41.280 11:18:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:41.280 No valid GPT data, bailing 00:27:41.280 11:18:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:41.280 11:18:49 -- scripts/common.sh@391 -- # pt= 00:27:41.280 11:18:49 -- scripts/common.sh@392 -- # return 1 00:27:41.280 11:18:49 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:27:41.280 11:18:49 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:41.280 11:18:49 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:41.280 11:18:49 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:27:41.280 11:18:49 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:27:41.280 11:18:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:41.280 11:18:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:41.280 11:18:49 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:27:41.280 11:18:49 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:27:41.280 11:18:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:41.280 No valid GPT data, bailing 00:27:41.280 11:18:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:41.280 11:18:49 -- scripts/common.sh@391 -- # pt= 00:27:41.280 11:18:49 -- scripts/common.sh@392 -- # return 1 00:27:41.280 11:18:49 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:27:41.280 11:18:49 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:27:41.280 11:18:49 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:41.280 11:18:49 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:41.280 11:18:49 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:41.280 11:18:49 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:41.280 11:18:49 -- nvmf/common.sh@656 -- # echo 1 00:27:41.280 11:18:49 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:27:41.280 11:18:49 -- nvmf/common.sh@658 -- # echo 1 00:27:41.280 11:18:49 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:27:41.280 11:18:49 -- nvmf/common.sh@661 -- # echo tcp 00:27:41.280 11:18:49 -- nvmf/common.sh@662 -- # echo 4420 00:27:41.280 11:18:49 -- nvmf/common.sh@663 -- # echo ipv4 00:27:41.280 11:18:49 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:41.280 11:18:49 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -a 10.0.0.1 -t tcp -s 4420 00:27:41.537 00:27:41.537 Discovery Log Number of Records 2, Generation counter 2 00:27:41.537 =====Discovery Log Entry 0====== 00:27:41.537 trtype: tcp 00:27:41.537 adrfam: ipv4 00:27:41.537 subtype: current discovery subsystem 00:27:41.537 treq: not specified, sq flow control disable supported 00:27:41.537 portid: 1 00:27:41.537 trsvcid: 4420 00:27:41.537 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:41.537 traddr: 10.0.0.1 00:27:41.537 eflags: none 00:27:41.537 sectype: none 00:27:41.537 =====Discovery Log Entry 1====== 00:27:41.537 trtype: tcp 00:27:41.537 adrfam: ipv4 00:27:41.537 subtype: nvme subsystem 00:27:41.537 treq: not specified, sq flow control disable supported 
00:27:41.537 portid: 1 00:27:41.537 trsvcid: 4420 00:27:41.537 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:41.537 traddr: 10.0.0.1 00:27:41.537 eflags: none 00:27:41.537 sectype: none 00:27:41.537 11:18:49 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:41.537 11:18:49 -- host/auth.sh@37 -- # echo 0 00:27:41.538 11:18:49 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:41.538 11:18:49 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:41.538 11:18:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:41.538 11:18:49 -- host/auth.sh@44 -- # digest=sha256 00:27:41.538 11:18:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.538 11:18:49 -- host/auth.sh@44 -- # keyid=1 00:27:41.538 11:18:49 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:41.538 11:18:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:41.538 11:18:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:41.538 11:18:49 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:41.538 11:18:49 -- host/auth.sh@100 -- # IFS=, 00:27:41.538 11:18:49 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:27:41.538 11:18:49 -- host/auth.sh@100 -- # IFS=, 00:27:41.538 11:18:49 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:41.538 11:18:49 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:41.538 11:18:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:41.538 11:18:49 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:27:41.538 11:18:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:41.538 11:18:49 -- host/auth.sh@68 -- # keyid=1 00:27:41.538 11:18:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:41.538 11:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.538 11:18:49 -- common/autotest_common.sh@10 -- # set +x 00:27:41.538 11:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.538 11:18:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:41.538 11:18:49 -- nvmf/common.sh@717 -- # local ip 00:27:41.538 11:18:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:41.538 11:18:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:41.538 11:18:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.538 11:18:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.538 11:18:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:41.538 11:18:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.538 11:18:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:41.538 11:18:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:41.538 11:18:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:41.538 11:18:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:41.538 11:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.538 11:18:49 -- common/autotest_common.sh@10 -- # set +x 00:27:41.796 
nvme0n1 00:27:41.797 11:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.797 11:18:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.797 11:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.797 11:18:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:41.797 11:18:49 -- common/autotest_common.sh@10 -- # set +x 00:27:41.797 11:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.797 11:18:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.797 11:18:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.797 11:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.797 11:18:49 -- common/autotest_common.sh@10 -- # set +x 00:27:41.797 11:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.797 11:18:49 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:27:41.797 11:18:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.797 11:18:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:41.797 11:18:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:41.797 11:18:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:41.797 11:18:49 -- host/auth.sh@44 -- # digest=sha256 00:27:41.797 11:18:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.797 11:18:49 -- host/auth.sh@44 -- # keyid=0 00:27:41.797 11:18:49 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:41.797 11:18:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:41.797 11:18:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:41.797 11:18:49 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:41.797 11:18:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:27:41.797 11:18:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:41.797 11:18:49 -- host/auth.sh@68 -- # digest=sha256 00:27:41.797 11:18:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:41.797 11:18:49 -- host/auth.sh@68 -- # keyid=0 00:27:41.797 11:18:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:41.797 11:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.797 11:18:49 -- common/autotest_common.sh@10 -- # set +x 00:27:41.797 11:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.797 11:18:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:41.797 11:18:49 -- nvmf/common.sh@717 -- # local ip 00:27:41.797 11:18:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:41.797 11:18:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:41.797 11:18:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.797 11:18:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.797 11:18:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:41.797 11:18:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.797 11:18:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:41.797 11:18:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:41.797 11:18:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:41.797 11:18:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:41.797 11:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.797 11:18:49 -- common/autotest_common.sh@10 -- # set +x 00:27:41.797 nvme0n1 
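Up to this point the trace has assembled a kernel nvmet soft target on the loopback TCP port and allowed exactly one DH-HMAC-CHAP host. Bash xtrace does not show the redirection targets of the echo lines, so the configfs attribute file names in the condensed sketch below are an assumption (the standard nvmet ones); the NQNs, address, port and backing device are taken from the trace, and the long DHHC-1 secret is elided rather than repeated.

  # target-side sketch; attribute names are assumed, values come from the trace above
  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # backing disk picked by the GPT scan above
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  # per-host DH-HMAC-CHAP settings written by nvmet_auth_set_key (attribute names assumed)
  mkdir "$host"
  echo 0              > "$subsys/attr_allow_any_host"       # assumed target of the 'echo 0' at host/auth.sh@37
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo 'DHHC-1:00:...' > "$host/dhchap_key"                 # the keyid-1 secret echoed in the trace
  ln -s "$host" "$subsys/allowed_hosts/"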
00:27:41.797 11:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.797 11:18:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:41.797 11:18:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.797 11:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.797 11:18:49 -- common/autotest_common.sh@10 -- # set +x 00:27:41.797 11:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.797 11:18:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.797 11:18:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.797 11:18:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.797 11:18:49 -- common/autotest_common.sh@10 -- # set +x 00:27:41.797 11:18:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.797 11:18:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:41.797 11:18:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:41.797 11:18:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:41.797 11:18:49 -- host/auth.sh@44 -- # digest=sha256 00:27:41.797 11:18:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.797 11:18:49 -- host/auth.sh@44 -- # keyid=1 00:27:41.797 11:18:49 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:41.797 11:18:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:41.797 11:18:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:41.797 11:18:49 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:41.797 11:18:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:27:41.797 11:18:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:41.797 11:18:50 -- host/auth.sh@68 -- # digest=sha256 00:27:41.797 11:18:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:41.797 11:18:50 -- host/auth.sh@68 -- # keyid=1 00:27:41.797 11:18:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:41.797 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.797 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:41.797 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.797 11:18:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:41.797 11:18:50 -- nvmf/common.sh@717 -- # local ip 00:27:41.797 11:18:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:41.797 11:18:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:41.797 11:18:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.797 11:18:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.797 11:18:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:41.797 11:18:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.797 11:18:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:41.797 11:18:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:41.797 11:18:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:41.797 11:18:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:41.797 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.055 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.055 nvme0n1 00:27:42.055 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.055 11:18:50 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:42.055 11:18:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:42.055 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.055 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.055 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.055 11:18:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.055 11:18:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.055 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.055 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.055 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.055 11:18:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:42.055 11:18:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:42.055 11:18:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:42.055 11:18:50 -- host/auth.sh@44 -- # digest=sha256 00:27:42.055 11:18:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.055 11:18:50 -- host/auth.sh@44 -- # keyid=2 00:27:42.055 11:18:50 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:42.055 11:18:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:42.055 11:18:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:42.055 11:18:50 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:42.055 11:18:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:27:42.055 11:18:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:42.055 11:18:50 -- host/auth.sh@68 -- # digest=sha256 00:27:42.055 11:18:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:42.055 11:18:50 -- host/auth.sh@68 -- # keyid=2 00:27:42.055 11:18:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:42.055 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.055 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.055 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.055 11:18:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:42.056 11:18:50 -- nvmf/common.sh@717 -- # local ip 00:27:42.056 11:18:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:42.056 11:18:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:42.056 11:18:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.056 11:18:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.056 11:18:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:42.056 11:18:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.056 11:18:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:42.056 11:18:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:42.056 11:18:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:42.056 11:18:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:42.056 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.056 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.313 nvme0n1 00:27:42.313 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.313 11:18:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.314 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.314 11:18:50 -- common/autotest_common.sh@10 -- # 
set +x 00:27:42.314 11:18:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:42.314 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.314 11:18:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.314 11:18:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.314 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.314 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.314 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.314 11:18:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:42.314 11:18:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:42.314 11:18:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:42.314 11:18:50 -- host/auth.sh@44 -- # digest=sha256 00:27:42.314 11:18:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.314 11:18:50 -- host/auth.sh@44 -- # keyid=3 00:27:42.314 11:18:50 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:42.314 11:18:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:42.314 11:18:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:42.314 11:18:50 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:42.314 11:18:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:27:42.314 11:18:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:42.314 11:18:50 -- host/auth.sh@68 -- # digest=sha256 00:27:42.314 11:18:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:42.314 11:18:50 -- host/auth.sh@68 -- # keyid=3 00:27:42.314 11:18:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:42.314 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.314 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.314 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.314 11:18:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:42.314 11:18:50 -- nvmf/common.sh@717 -- # local ip 00:27:42.314 11:18:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:42.314 11:18:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:42.314 11:18:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.314 11:18:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.314 11:18:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:42.314 11:18:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.314 11:18:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:42.314 11:18:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:42.314 11:18:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:42.314 11:18:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:42.314 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.314 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.314 nvme0n1 00:27:42.314 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.314 11:18:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.314 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.314 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.314 11:18:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:42.314 11:18:50 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.572 11:18:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.572 11:18:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.572 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.572 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.572 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.572 11:18:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:42.572 11:18:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:42.572 11:18:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:42.572 11:18:50 -- host/auth.sh@44 -- # digest=sha256 00:27:42.572 11:18:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.572 11:18:50 -- host/auth.sh@44 -- # keyid=4 00:27:42.572 11:18:50 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:42.572 11:18:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:42.572 11:18:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:42.572 11:18:50 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:42.572 11:18:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:27:42.572 11:18:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:42.572 11:18:50 -- host/auth.sh@68 -- # digest=sha256 00:27:42.572 11:18:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:42.572 11:18:50 -- host/auth.sh@68 -- # keyid=4 00:27:42.572 11:18:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:42.572 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.572 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.572 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.572 11:18:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:42.572 11:18:50 -- nvmf/common.sh@717 -- # local ip 00:27:42.572 11:18:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:42.572 11:18:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:42.572 11:18:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.572 11:18:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.572 11:18:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:42.572 11:18:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.572 11:18:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:42.572 11:18:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:42.572 11:18:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:42.572 11:18:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.572 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.572 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.572 nvme0n1 00:27:42.572 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.572 11:18:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.572 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.572 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.572 11:18:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:42.572 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.572 11:18:50 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.572 11:18:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.572 11:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.572 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:27:42.572 11:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.572 11:18:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.572 11:18:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:42.572 11:18:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:42.572 11:18:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:42.572 11:18:50 -- host/auth.sh@44 -- # digest=sha256 00:27:42.572 11:18:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.572 11:18:50 -- host/auth.sh@44 -- # keyid=0 00:27:42.572 11:18:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:42.572 11:18:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:42.572 11:18:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:43.138 11:18:51 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:43.138 11:18:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:27:43.138 11:18:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:43.138 11:18:51 -- host/auth.sh@68 -- # digest=sha256 00:27:43.138 11:18:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:43.138 11:18:51 -- host/auth.sh@68 -- # keyid=0 00:27:43.138 11:18:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:43.138 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.138 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.138 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.138 11:18:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:43.138 11:18:51 -- nvmf/common.sh@717 -- # local ip 00:27:43.138 11:18:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:43.138 11:18:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:43.138 11:18:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.138 11:18:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.138 11:18:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:43.138 11:18:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.138 11:18:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:43.138 11:18:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:43.138 11:18:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:43.138 11:18:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:43.138 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.138 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.138 nvme0n1 00:27:43.138 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.138 11:18:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.138 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.138 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.138 11:18:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:43.138 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.138 11:18:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.138 11:18:51 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.138 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.138 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.138 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.138 11:18:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:43.138 11:18:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:43.138 11:18:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:43.138 11:18:51 -- host/auth.sh@44 -- # digest=sha256 00:27:43.138 11:18:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.138 11:18:51 -- host/auth.sh@44 -- # keyid=1 00:27:43.138 11:18:51 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:43.138 11:18:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:43.138 11:18:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:43.138 11:18:51 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:43.138 11:18:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:27:43.138 11:18:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:43.138 11:18:51 -- host/auth.sh@68 -- # digest=sha256 00:27:43.138 11:18:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:43.138 11:18:51 -- host/auth.sh@68 -- # keyid=1 00:27:43.138 11:18:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:43.138 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.138 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.138 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.138 11:18:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:43.138 11:18:51 -- nvmf/common.sh@717 -- # local ip 00:27:43.138 11:18:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:43.138 11:18:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:43.138 11:18:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.138 11:18:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.138 11:18:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:43.138 11:18:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.138 11:18:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:43.138 11:18:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:43.138 11:18:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:43.138 11:18:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:43.138 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.139 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.397 nvme0n1 00:27:43.397 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.397 11:18:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.397 11:18:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:43.397 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.397 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.397 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.397 11:18:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.397 11:18:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.397 11:18:51 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:27:43.397 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.397 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.397 11:18:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:43.397 11:18:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:43.397 11:18:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:43.397 11:18:51 -- host/auth.sh@44 -- # digest=sha256 00:27:43.397 11:18:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.397 11:18:51 -- host/auth.sh@44 -- # keyid=2 00:27:43.397 11:18:51 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:43.397 11:18:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:43.397 11:18:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:43.397 11:18:51 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:43.397 11:18:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:27:43.397 11:18:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:43.397 11:18:51 -- host/auth.sh@68 -- # digest=sha256 00:27:43.397 11:18:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:43.397 11:18:51 -- host/auth.sh@68 -- # keyid=2 00:27:43.397 11:18:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:43.397 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.397 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.397 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.397 11:18:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:43.397 11:18:51 -- nvmf/common.sh@717 -- # local ip 00:27:43.397 11:18:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:43.397 11:18:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:43.398 11:18:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.398 11:18:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.398 11:18:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:43.398 11:18:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.398 11:18:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:43.398 11:18:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:43.398 11:18:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:43.398 11:18:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:43.398 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.398 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.398 nvme0n1 00:27:43.398 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.398 11:18:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.398 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.398 11:18:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:43.398 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.655 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.655 11:18:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.655 11:18:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.655 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.655 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.655 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.655 
11:18:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:43.655 11:18:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:43.655 11:18:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:43.655 11:18:51 -- host/auth.sh@44 -- # digest=sha256 00:27:43.655 11:18:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.655 11:18:51 -- host/auth.sh@44 -- # keyid=3 00:27:43.655 11:18:51 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:43.655 11:18:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:43.655 11:18:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:43.655 11:18:51 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:43.655 11:18:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:27:43.655 11:18:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:43.655 11:18:51 -- host/auth.sh@68 -- # digest=sha256 00:27:43.655 11:18:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:43.655 11:18:51 -- host/auth.sh@68 -- # keyid=3 00:27:43.655 11:18:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:43.655 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.655 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.655 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.655 11:18:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:43.655 11:18:51 -- nvmf/common.sh@717 -- # local ip 00:27:43.655 11:18:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:43.655 11:18:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:43.655 11:18:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.655 11:18:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.655 11:18:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:43.655 11:18:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.655 11:18:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:43.655 11:18:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:43.655 11:18:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:43.655 11:18:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:43.655 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.655 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.655 nvme0n1 00:27:43.655 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.655 11:18:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.655 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.655 11:18:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:43.655 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.655 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.655 11:18:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.655 11:18:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.655 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.655 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.914 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.914 11:18:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:43.914 11:18:51 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:27:43.914 11:18:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:43.914 11:18:51 -- host/auth.sh@44 -- # digest=sha256 00:27:43.914 11:18:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.914 11:18:51 -- host/auth.sh@44 -- # keyid=4 00:27:43.914 11:18:51 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:43.914 11:18:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:43.914 11:18:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:43.914 11:18:51 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:43.914 11:18:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:27:43.914 11:18:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:43.914 11:18:51 -- host/auth.sh@68 -- # digest=sha256 00:27:43.914 11:18:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:43.914 11:18:51 -- host/auth.sh@68 -- # keyid=4 00:27:43.914 11:18:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:43.914 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.914 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.914 11:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.914 11:18:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:43.914 11:18:51 -- nvmf/common.sh@717 -- # local ip 00:27:43.914 11:18:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:43.914 11:18:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:43.914 11:18:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.914 11:18:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.914 11:18:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:43.914 11:18:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.914 11:18:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:43.914 11:18:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:43.914 11:18:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:43.914 11:18:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.914 11:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.914 11:18:51 -- common/autotest_common.sh@10 -- # set +x 00:27:43.914 nvme0n1 00:27:43.914 11:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.914 11:18:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.914 11:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.914 11:18:52 -- common/autotest_common.sh@10 -- # set +x 00:27:43.914 11:18:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:43.914 11:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.914 11:18:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.914 11:18:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.914 11:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.914 11:18:52 -- common/autotest_common.sh@10 -- # set +x 00:27:43.914 11:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.914 11:18:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.914 11:18:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:43.914 11:18:52 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:27:43.914 11:18:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:43.914 11:18:52 -- host/auth.sh@44 -- # digest=sha256 00:27:43.914 11:18:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.914 11:18:52 -- host/auth.sh@44 -- # keyid=0 00:27:43.914 11:18:52 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:43.914 11:18:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:43.914 11:18:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:44.850 11:18:52 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:44.850 11:18:52 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:27:44.850 11:18:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:44.850 11:18:52 -- host/auth.sh@68 -- # digest=sha256 00:27:44.850 11:18:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:44.850 11:18:52 -- host/auth.sh@68 -- # keyid=0 00:27:44.850 11:18:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:44.850 11:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.850 11:18:52 -- common/autotest_common.sh@10 -- # set +x 00:27:44.850 11:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.850 11:18:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:44.850 11:18:52 -- nvmf/common.sh@717 -- # local ip 00:27:44.850 11:18:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:44.850 11:18:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:44.850 11:18:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.850 11:18:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.850 11:18:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:44.850 11:18:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.850 11:18:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:44.850 11:18:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:44.850 11:18:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:44.850 11:18:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:44.850 11:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.850 11:18:52 -- common/autotest_common.sh@10 -- # set +x 00:27:44.850 nvme0n1 00:27:44.850 11:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.850 11:18:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.850 11:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.850 11:18:52 -- common/autotest_common.sh@10 -- # set +x 00:27:44.850 11:18:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:44.850 11:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.850 11:18:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.850 11:18:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.850 11:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.850 11:18:52 -- common/autotest_common.sh@10 -- # set +x 00:27:44.850 11:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.850 11:18:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:44.850 11:18:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:44.850 11:18:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:44.850 11:18:52 -- host/auth.sh@44 -- # 
digest=sha256 00:27:44.850 11:18:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.850 11:18:52 -- host/auth.sh@44 -- # keyid=1 00:27:44.850 11:18:52 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:44.850 11:18:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:44.850 11:18:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:44.850 11:18:52 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:44.850 11:18:52 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:27:44.850 11:18:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:44.850 11:18:52 -- host/auth.sh@68 -- # digest=sha256 00:27:44.850 11:18:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:44.850 11:18:52 -- host/auth.sh@68 -- # keyid=1 00:27:44.850 11:18:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:44.850 11:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.850 11:18:52 -- common/autotest_common.sh@10 -- # set +x 00:27:44.850 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.850 11:18:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:44.850 11:18:53 -- nvmf/common.sh@717 -- # local ip 00:27:44.850 11:18:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:44.850 11:18:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:44.850 11:18:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.850 11:18:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.850 11:18:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:44.850 11:18:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.850 11:18:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:44.850 11:18:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:44.850 11:18:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:44.850 11:18:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:44.850 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.850 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.108 nvme0n1 00:27:45.108 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.108 11:18:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.108 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.108 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.108 11:18:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:45.108 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.108 11:18:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.108 11:18:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.108 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.108 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.108 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.108 11:18:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:45.108 11:18:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:45.108 11:18:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:45.108 11:18:53 -- host/auth.sh@44 -- # digest=sha256 00:27:45.108 11:18:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.108 11:18:53 -- host/auth.sh@44 
-- # keyid=2 00:27:45.108 11:18:53 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:45.108 11:18:53 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:45.108 11:18:53 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:45.109 11:18:53 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:45.109 11:18:53 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:27:45.109 11:18:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:45.109 11:18:53 -- host/auth.sh@68 -- # digest=sha256 00:27:45.109 11:18:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:45.109 11:18:53 -- host/auth.sh@68 -- # keyid=2 00:27:45.109 11:18:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:45.109 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.109 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.109 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.109 11:18:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:45.109 11:18:53 -- nvmf/common.sh@717 -- # local ip 00:27:45.109 11:18:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:45.109 11:18:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:45.109 11:18:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.109 11:18:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.109 11:18:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:45.109 11:18:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.109 11:18:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:45.109 11:18:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:45.109 11:18:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:45.109 11:18:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:45.109 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.109 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.365 nvme0n1 00:27:45.365 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.365 11:18:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.365 11:18:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:45.365 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.365 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.365 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.365 11:18:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.365 11:18:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.365 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.365 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.365 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.365 11:18:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:45.365 11:18:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:45.365 11:18:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:45.365 11:18:53 -- host/auth.sh@44 -- # digest=sha256 00:27:45.365 11:18:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.365 11:18:53 -- host/auth.sh@44 -- # keyid=3 00:27:45.365 11:18:53 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:45.365 11:18:53 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:45.365 11:18:53 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:45.365 11:18:53 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:45.365 11:18:53 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:27:45.365 11:18:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:45.365 11:18:53 -- host/auth.sh@68 -- # digest=sha256 00:27:45.365 11:18:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:45.365 11:18:53 -- host/auth.sh@68 -- # keyid=3 00:27:45.365 11:18:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:45.365 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.365 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.365 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.365 11:18:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:45.365 11:18:53 -- nvmf/common.sh@717 -- # local ip 00:27:45.365 11:18:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:45.365 11:18:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:45.365 11:18:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.365 11:18:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.365 11:18:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:45.365 11:18:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.365 11:18:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:45.365 11:18:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:45.365 11:18:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:45.365 11:18:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:45.365 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.365 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.621 nvme0n1 00:27:45.621 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.621 11:18:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.621 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.621 11:18:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:45.621 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.621 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.621 11:18:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.621 11:18:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.621 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.621 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.621 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.621 11:18:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:45.621 11:18:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:45.621 11:18:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:45.621 11:18:53 -- host/auth.sh@44 -- # digest=sha256 00:27:45.621 11:18:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.621 11:18:53 -- host/auth.sh@44 -- # keyid=4 00:27:45.622 11:18:53 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:45.622 11:18:53 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:45.622 11:18:53 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:27:45.622 11:18:53 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:45.622 11:18:53 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:27:45.622 11:18:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:45.622 11:18:53 -- host/auth.sh@68 -- # digest=sha256 00:27:45.622 11:18:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:45.622 11:18:53 -- host/auth.sh@68 -- # keyid=4 00:27:45.622 11:18:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:45.622 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.622 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.622 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.622 11:18:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:45.622 11:18:53 -- nvmf/common.sh@717 -- # local ip 00:27:45.622 11:18:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:45.622 11:18:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:45.622 11:18:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.622 11:18:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.622 11:18:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:45.622 11:18:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.622 11:18:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:45.622 11:18:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:45.622 11:18:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:45.622 11:18:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.622 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.622 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:27:45.879 nvme0n1 00:27:45.879 11:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.879 11:18:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.879 11:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.879 11:18:54 -- common/autotest_common.sh@10 -- # set +x 00:27:45.879 11:18:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:45.879 11:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.879 11:18:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.879 11:18:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.879 11:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.879 11:18:54 -- common/autotest_common.sh@10 -- # set +x 00:27:45.879 11:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.879 11:18:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.879 11:18:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:45.879 11:18:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:45.879 11:18:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:45.879 11:18:54 -- host/auth.sh@44 -- # digest=sha256 00:27:45.879 11:18:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.879 11:18:54 -- host/auth.sh@44 -- # keyid=0 00:27:45.879 11:18:54 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:45.879 11:18:54 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:45.879 11:18:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:47.777 11:18:55 -- 
host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:47.777 11:18:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:27:47.777 11:18:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:47.777 11:18:55 -- host/auth.sh@68 -- # digest=sha256 00:27:47.777 11:18:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:47.777 11:18:55 -- host/auth.sh@68 -- # keyid=0 00:27:47.777 11:18:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:47.777 11:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.777 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:27:47.777 11:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.777 11:18:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:47.777 11:18:55 -- nvmf/common.sh@717 -- # local ip 00:27:47.777 11:18:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:47.777 11:18:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:47.777 11:18:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.777 11:18:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.777 11:18:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:47.777 11:18:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.777 11:18:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:47.777 11:18:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:47.777 11:18:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:47.777 11:18:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:47.777 11:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.777 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:27:48.036 nvme0n1 00:27:48.036 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.036 11:18:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.036 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.036 11:18:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:48.036 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:27:48.036 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.036 11:18:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.036 11:18:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.036 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.036 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:27:48.293 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.293 11:18:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:48.293 11:18:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:48.293 11:18:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:48.293 11:18:56 -- host/auth.sh@44 -- # digest=sha256 00:27:48.293 11:18:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.293 11:18:56 -- host/auth.sh@44 -- # keyid=1 00:27:48.293 11:18:56 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:48.293 11:18:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:48.293 11:18:56 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:48.293 11:18:56 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:48.293 11:18:56 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:27:48.293 11:18:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:48.293 11:18:56 -- host/auth.sh@68 -- # digest=sha256 00:27:48.293 11:18:56 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:48.293 11:18:56 -- host/auth.sh@68 -- # keyid=1 00:27:48.293 11:18:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:48.293 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.293 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:27:48.293 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.293 11:18:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:48.293 11:18:56 -- nvmf/common.sh@717 -- # local ip 00:27:48.293 11:18:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:48.293 11:18:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:48.293 11:18:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.293 11:18:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.293 11:18:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:48.293 11:18:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.293 11:18:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:48.293 11:18:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:48.293 11:18:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:48.293 11:18:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:48.293 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.293 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:27:48.550 nvme0n1 00:27:48.550 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.550 11:18:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.550 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.550 11:18:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:48.550 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:27:48.550 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.550 11:18:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.550 11:18:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.550 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.550 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:27:48.550 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.550 11:18:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:48.550 11:18:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:48.550 11:18:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:48.550 11:18:56 -- host/auth.sh@44 -- # digest=sha256 00:27:48.550 11:18:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.550 11:18:56 -- host/auth.sh@44 -- # keyid=2 00:27:48.550 11:18:56 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:48.550 11:18:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:48.550 11:18:56 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:48.550 11:18:56 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:48.550 11:18:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:27:48.550 11:18:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:48.550 11:18:56 -- 
host/auth.sh@68 -- # digest=sha256 00:27:48.550 11:18:56 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:48.550 11:18:56 -- host/auth.sh@68 -- # keyid=2 00:27:48.550 11:18:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:48.550 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.550 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:27:48.550 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.550 11:18:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:48.550 11:18:56 -- nvmf/common.sh@717 -- # local ip 00:27:48.550 11:18:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:48.550 11:18:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:48.550 11:18:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.551 11:18:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.551 11:18:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:48.551 11:18:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.551 11:18:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:48.551 11:18:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:48.551 11:18:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:48.551 11:18:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:48.551 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.551 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:27:49.148 nvme0n1 00:27:49.148 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.148 11:18:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.148 11:18:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:49.148 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.148 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.148 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.148 11:18:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.148 11:18:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.148 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.148 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.148 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.148 11:18:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:49.148 11:18:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:49.148 11:18:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:49.148 11:18:57 -- host/auth.sh@44 -- # digest=sha256 00:27:49.148 11:18:57 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.148 11:18:57 -- host/auth.sh@44 -- # keyid=3 00:27:49.148 11:18:57 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:49.148 11:18:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:49.148 11:18:57 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:49.148 11:18:57 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:49.148 11:18:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:27:49.148 11:18:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:49.148 11:18:57 -- host/auth.sh@68 -- # digest=sha256 00:27:49.148 11:18:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:49.148 11:18:57 
-- host/auth.sh@68 -- # keyid=3 00:27:49.148 11:18:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:49.148 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.148 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.148 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.148 11:18:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:49.148 11:18:57 -- nvmf/common.sh@717 -- # local ip 00:27:49.148 11:18:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:49.148 11:18:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:49.148 11:18:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.148 11:18:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.148 11:18:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:49.148 11:18:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.148 11:18:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:49.148 11:18:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:49.148 11:18:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:49.148 11:18:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:49.148 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.148 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.406 nvme0n1 00:27:49.406 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.406 11:18:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:49.406 11:18:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.406 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.406 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.406 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.406 11:18:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.406 11:18:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.406 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.406 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.406 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.406 11:18:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:49.406 11:18:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:49.406 11:18:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:49.406 11:18:57 -- host/auth.sh@44 -- # digest=sha256 00:27:49.406 11:18:57 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.406 11:18:57 -- host/auth.sh@44 -- # keyid=4 00:27:49.406 11:18:57 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:49.406 11:18:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:49.406 11:18:57 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:49.406 11:18:57 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:49.406 11:18:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:27:49.406 11:18:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:49.406 11:18:57 -- host/auth.sh@68 -- # digest=sha256 00:27:49.406 11:18:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:49.406 11:18:57 -- host/auth.sh@68 -- # keyid=4 00:27:49.406 11:18:57 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:49.406 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.406 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.406 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.406 11:18:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:49.406 11:18:57 -- nvmf/common.sh@717 -- # local ip 00:27:49.406 11:18:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:49.406 11:18:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:49.406 11:18:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.406 11:18:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.406 11:18:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:49.406 11:18:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.406 11:18:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:49.406 11:18:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:49.406 11:18:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:49.406 11:18:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.406 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.406 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.972 nvme0n1 00:27:49.972 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.972 11:18:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.972 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.972 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.972 11:18:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:49.972 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.972 11:18:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.972 11:18:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.972 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.972 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:27:49.972 11:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.972 11:18:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.972 11:18:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:49.972 11:18:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:49.972 11:18:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:49.972 11:18:58 -- host/auth.sh@44 -- # digest=sha256 00:27:49.972 11:18:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.972 11:18:58 -- host/auth.sh@44 -- # keyid=0 00:27:49.972 11:18:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:49.972 11:18:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:49.972 11:18:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:54.157 11:19:01 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:54.157 11:19:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:27:54.157 11:19:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.157 11:19:01 -- host/auth.sh@68 -- # digest=sha256 00:27:54.157 11:19:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:54.157 11:19:01 -- host/auth.sh@68 -- # keyid=0 00:27:54.157 11:19:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
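The trace above is one pass of the test's inner loop: for every (digest, dhgroup, keyid) combination the target-side secret is installed through nvmet_auth_set_key, the SPDK host options are narrowed to that digest/DH group, and a controller is re-attached with the matching --dhchap-key. The DHHC-1:NN:<base64>: strings are the standard NVMe in-band authentication secret representation; the middle field appears to select the optional hash transform applied to the secret, which is why key0/key1 carry 00 while key2-key4 carry 01-03. A minimal sketch of one iteration, assuming the usual kernel nvmet configfs layout for the allowed host (xtrace does not show the redirection targets of the echo commands, so the dhchap_* paths are an assumption) and using scripts/rpc.py in place of the rpc_cmd wrapper:

# Target side: digest, DH group and secret accepted for this host NQN
# (assumed layout: /sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_*)
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"
echo ffdhe8192 > "$host_dir/dhchap_dhgroup"
echo 'DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G:' > "$host_dir/dhchap_key"

# Host side (SPDK initiator): restrict the negotiable digests/DH groups, then
# attach using the key the script set up earlier under the name "key0"
# (that registration step is not shown in this excerpt)
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0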
00:27:54.157 11:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.157 11:19:01 -- common/autotest_common.sh@10 -- # set +x 00:27:54.157 11:19:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.157 11:19:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.157 11:19:01 -- nvmf/common.sh@717 -- # local ip 00:27:54.157 11:19:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.157 11:19:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.157 11:19:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.157 11:19:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.157 11:19:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.157 11:19:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.157 11:19:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.157 11:19:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.157 11:19:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.157 11:19:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:54.157 11:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.157 11:19:01 -- common/autotest_common.sh@10 -- # set +x 00:27:54.415 nvme0n1 00:27:54.415 11:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.415 11:19:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.415 11:19:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.415 11:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.415 11:19:02 -- common/autotest_common.sh@10 -- # set +x 00:27:54.415 11:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.415 11:19:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.415 11:19:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.415 11:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.415 11:19:02 -- common/autotest_common.sh@10 -- # set +x 00:27:54.415 11:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.415 11:19:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.415 11:19:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:54.415 11:19:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.415 11:19:02 -- host/auth.sh@44 -- # digest=sha256 00:27:54.415 11:19:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.415 11:19:02 -- host/auth.sh@44 -- # keyid=1 00:27:54.415 11:19:02 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:54.415 11:19:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.415 11:19:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:54.415 11:19:02 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:54.415 11:19:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:27:54.415 11:19:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.415 11:19:02 -- host/auth.sh@68 -- # digest=sha256 00:27:54.415 11:19:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:54.415 11:19:02 -- host/auth.sh@68 -- # keyid=1 00:27:54.415 11:19:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:54.415 11:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.415 11:19:02 -- 
common/autotest_common.sh@10 -- # set +x 00:27:54.415 11:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.415 11:19:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.415 11:19:02 -- nvmf/common.sh@717 -- # local ip 00:27:54.416 11:19:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.416 11:19:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.416 11:19:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.416 11:19:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.416 11:19:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.416 11:19:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.416 11:19:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.416 11:19:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.416 11:19:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.416 11:19:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:54.416 11:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.416 11:19:02 -- common/autotest_common.sh@10 -- # set +x 00:27:54.982 nvme0n1 00:27:54.982 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.982 11:19:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.982 11:19:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.982 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.982 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:27:54.982 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.240 11:19:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.240 11:19:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.240 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.240 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:27:55.240 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.240 11:19:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.240 11:19:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:55.240 11:19:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.240 11:19:03 -- host/auth.sh@44 -- # digest=sha256 00:27:55.240 11:19:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.240 11:19:03 -- host/auth.sh@44 -- # keyid=2 00:27:55.240 11:19:03 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:55.240 11:19:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.240 11:19:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:55.240 11:19:03 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:55.240 11:19:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:27:55.240 11:19:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.240 11:19:03 -- host/auth.sh@68 -- # digest=sha256 00:27:55.240 11:19:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:55.240 11:19:03 -- host/auth.sh@68 -- # keyid=2 00:27:55.240 11:19:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.240 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.240 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:27:55.240 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.240 11:19:03 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:27:55.240 11:19:03 -- nvmf/common.sh@717 -- # local ip 00:27:55.240 11:19:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.240 11:19:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.240 11:19:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.240 11:19:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.240 11:19:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.240 11:19:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.240 11:19:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.240 11:19:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.240 11:19:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.240 11:19:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:55.240 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.240 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:27:55.808 nvme0n1 00:27:55.808 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.808 11:19:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.808 11:19:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.808 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.808 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:27:55.808 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.808 11:19:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.808 11:19:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.808 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.808 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:27:55.808 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.808 11:19:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.808 11:19:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:55.808 11:19:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.808 11:19:03 -- host/auth.sh@44 -- # digest=sha256 00:27:55.808 11:19:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.808 11:19:03 -- host/auth.sh@44 -- # keyid=3 00:27:55.808 11:19:03 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:55.808 11:19:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.808 11:19:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:55.808 11:19:03 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:55.808 11:19:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:27:55.808 11:19:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.808 11:19:03 -- host/auth.sh@68 -- # digest=sha256 00:27:55.808 11:19:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:55.808 11:19:03 -- host/auth.sh@68 -- # keyid=3 00:27:55.808 11:19:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.808 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.808 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:27:55.808 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.808 11:19:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.808 11:19:03 -- nvmf/common.sh@717 -- # local ip 00:27:55.808 11:19:03 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:27:55.808 11:19:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.808 11:19:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.808 11:19:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.808 11:19:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.808 11:19:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.808 11:19:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.808 11:19:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.808 11:19:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.808 11:19:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:55.808 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.808 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:27:56.440 nvme0n1 00:27:56.440 11:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.440 11:19:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.440 11:19:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.440 11:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.440 11:19:04 -- common/autotest_common.sh@10 -- # set +x 00:27:56.440 11:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.440 11:19:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.440 11:19:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.440 11:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.440 11:19:04 -- common/autotest_common.sh@10 -- # set +x 00:27:56.440 11:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.440 11:19:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.440 11:19:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:56.440 11:19:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.440 11:19:04 -- host/auth.sh@44 -- # digest=sha256 00:27:56.440 11:19:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.440 11:19:04 -- host/auth.sh@44 -- # keyid=4 00:27:56.440 11:19:04 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:56.441 11:19:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.441 11:19:04 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:56.441 11:19:04 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:56.441 11:19:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:27:56.441 11:19:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.441 11:19:04 -- host/auth.sh@68 -- # digest=sha256 00:27:56.441 11:19:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:56.441 11:19:04 -- host/auth.sh@68 -- # keyid=4 00:27:56.441 11:19:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.441 11:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.441 11:19:04 -- common/autotest_common.sh@10 -- # set +x 00:27:56.441 11:19:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.441 11:19:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.441 11:19:04 -- nvmf/common.sh@717 -- # local ip 00:27:56.441 11:19:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.441 11:19:04 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:27:56.441 11:19:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.441 11:19:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.441 11:19:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.441 11:19:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.441 11:19:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.441 11:19:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.441 11:19:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.441 11:19:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.441 11:19:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.441 11:19:04 -- common/autotest_common.sh@10 -- # set +x 00:27:57.375 nvme0n1 00:27:57.375 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.375 11:19:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.375 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.375 11:19:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.375 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.375 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.375 11:19:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.375 11:19:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.375 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.375 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.375 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.375 11:19:05 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:27:57.375 11:19:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.375 11:19:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.375 11:19:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:57.375 11:19:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.375 11:19:05 -- host/auth.sh@44 -- # digest=sha384 00:27:57.375 11:19:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.375 11:19:05 -- host/auth.sh@44 -- # keyid=0 00:27:57.375 11:19:05 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:57.375 11:19:05 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:57.375 11:19:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:57.375 11:19:05 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:57.375 11:19:05 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:27:57.375 11:19:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.375 11:19:05 -- host/auth.sh@68 -- # digest=sha384 00:27:57.375 11:19:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:57.375 11:19:05 -- host/auth.sh@68 -- # keyid=0 00:27:57.375 11:19:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.375 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.375 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.375 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.375 11:19:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.375 11:19:05 -- nvmf/common.sh@717 -- # local ip 00:27:57.375 11:19:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.375 11:19:05 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:27:57.375 11:19:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.375 11:19:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.375 11:19:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.375 11:19:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.375 11:19:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.375 11:19:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.375 11:19:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.375 11:19:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:57.375 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.375 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.375 nvme0n1 00:27:57.375 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.375 11:19:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.375 11:19:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.375 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.375 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.375 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.375 11:19:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.375 11:19:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.375 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.375 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.375 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.376 11:19:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.376 11:19:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:57.376 11:19:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.376 11:19:05 -- host/auth.sh@44 -- # digest=sha384 00:27:57.376 11:19:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.376 11:19:05 -- host/auth.sh@44 -- # keyid=1 00:27:57.376 11:19:05 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:57.376 11:19:05 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:57.376 11:19:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:57.376 11:19:05 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:57.376 11:19:05 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:27:57.376 11:19:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.376 11:19:05 -- host/auth.sh@68 -- # digest=sha384 00:27:57.376 11:19:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:57.376 11:19:05 -- host/auth.sh@68 -- # keyid=1 00:27:57.376 11:19:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.376 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.376 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.376 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.376 11:19:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.376 11:19:05 -- nvmf/common.sh@717 -- # local ip 00:27:57.376 11:19:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.376 11:19:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.376 11:19:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.376 
11:19:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.376 11:19:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.376 11:19:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.376 11:19:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.376 11:19:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.376 11:19:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.376 11:19:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:57.376 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.376 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.634 nvme0n1 00:27:57.634 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.634 11:19:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.634 11:19:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.634 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.634 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.634 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.634 11:19:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.634 11:19:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.634 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.634 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.634 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.634 11:19:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.634 11:19:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:57.634 11:19:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.634 11:19:05 -- host/auth.sh@44 -- # digest=sha384 00:27:57.634 11:19:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.634 11:19:05 -- host/auth.sh@44 -- # keyid=2 00:27:57.634 11:19:05 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:57.634 11:19:05 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:57.634 11:19:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:57.634 11:19:05 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:57.634 11:19:05 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:27:57.634 11:19:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.634 11:19:05 -- host/auth.sh@68 -- # digest=sha384 00:27:57.634 11:19:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:57.634 11:19:05 -- host/auth.sh@68 -- # keyid=2 00:27:57.634 11:19:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.634 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.634 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.634 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.634 11:19:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.634 11:19:05 -- nvmf/common.sh@717 -- # local ip 00:27:57.634 11:19:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.634 11:19:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.634 11:19:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.634 11:19:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.634 11:19:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.634 11:19:05 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.634 11:19:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.634 11:19:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.634 11:19:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.634 11:19:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:57.634 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.634 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.634 nvme0n1 00:27:57.634 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.634 11:19:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.634 11:19:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.634 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.634 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.634 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.634 11:19:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.634 11:19:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.634 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.634 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.892 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.892 11:19:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.892 11:19:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:57.892 11:19:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.892 11:19:05 -- host/auth.sh@44 -- # digest=sha384 00:27:57.892 11:19:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.892 11:19:05 -- host/auth.sh@44 -- # keyid=3 00:27:57.892 11:19:05 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:57.892 11:19:05 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:57.892 11:19:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:57.892 11:19:05 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:57.892 11:19:05 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:27:57.892 11:19:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.892 11:19:05 -- host/auth.sh@68 -- # digest=sha384 00:27:57.892 11:19:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:57.892 11:19:05 -- host/auth.sh@68 -- # keyid=3 00:27:57.892 11:19:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.892 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.892 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.892 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.892 11:19:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.892 11:19:05 -- nvmf/common.sh@717 -- # local ip 00:27:57.892 11:19:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.892 11:19:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.892 11:19:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.892 11:19:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.892 11:19:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.892 11:19:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.892 11:19:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
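Each combination then finishes with the same verification and teardown, repeated throughout the trace: bdev_nvme_get_controllers must report exactly the controller that was just authenticated (the stray nvme0n1 tokens are the bdev names returned by the attach call), and the controller is detached so the next keyid starts from a clean state. Stripped of the xtrace plumbing, the check amounts to the following sketch, again substituting scripts/rpc.py for the rpc_cmd wrapper:

# Confirm the DH-HMAC-CHAP attach produced the expected controller, then tear it down
ctrlr=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrlr == nvme0 ]]   # the script's [[ nvme0 == \n\v\m\e\0 ]] comparison
scripts/rpc.py bdev_nvme_detach_controller nvme0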
00:27:57.892 11:19:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.892 11:19:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.892 11:19:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:57.892 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.892 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.892 nvme0n1 00:27:57.892 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.892 11:19:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.892 11:19:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.892 11:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.892 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:27:57.892 11:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.892 11:19:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.892 11:19:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.892 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.892 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:57.892 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.892 11:19:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.892 11:19:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:57.892 11:19:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.892 11:19:06 -- host/auth.sh@44 -- # digest=sha384 00:27:57.892 11:19:06 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.892 11:19:06 -- host/auth.sh@44 -- # keyid=4 00:27:57.892 11:19:06 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:57.892 11:19:06 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:57.892 11:19:06 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:57.892 11:19:06 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:57.892 11:19:06 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:27:57.892 11:19:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.892 11:19:06 -- host/auth.sh@68 -- # digest=sha384 00:27:57.892 11:19:06 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:57.892 11:19:06 -- host/auth.sh@68 -- # keyid=4 00:27:57.892 11:19:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.892 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.892 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:57.892 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.892 11:19:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.892 11:19:06 -- nvmf/common.sh@717 -- # local ip 00:27:57.892 11:19:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.892 11:19:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.892 11:19:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.892 11:19:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.892 11:19:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.892 11:19:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.892 11:19:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.892 11:19:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.892 
11:19:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.892 11:19:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.892 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.892 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.150 nvme0n1 00:27:58.150 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.150 11:19:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.150 11:19:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.150 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.150 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.150 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.150 11:19:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.150 11:19:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.150 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.150 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.150 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.150 11:19:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.150 11:19:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.150 11:19:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:58.150 11:19:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.150 11:19:06 -- host/auth.sh@44 -- # digest=sha384 00:27:58.150 11:19:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.150 11:19:06 -- host/auth.sh@44 -- # keyid=0 00:27:58.150 11:19:06 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:58.150 11:19:06 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:58.150 11:19:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:58.150 11:19:06 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:58.150 11:19:06 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:27:58.150 11:19:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.150 11:19:06 -- host/auth.sh@68 -- # digest=sha384 00:27:58.150 11:19:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:58.150 11:19:06 -- host/auth.sh@68 -- # keyid=0 00:27:58.150 11:19:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.150 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.150 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.150 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.150 11:19:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.150 11:19:06 -- nvmf/common.sh@717 -- # local ip 00:27:58.150 11:19:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.150 11:19:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.150 11:19:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.150 11:19:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.150 11:19:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.150 11:19:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.150 11:19:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.150 11:19:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.150 11:19:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.150 11:19:06 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:58.150 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.150 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.409 nvme0n1 00:27:58.409 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.409 11:19:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.409 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.409 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.409 11:19:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.409 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.409 11:19:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.409 11:19:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.409 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.409 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.409 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.409 11:19:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.409 11:19:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:58.409 11:19:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.409 11:19:06 -- host/auth.sh@44 -- # digest=sha384 00:27:58.409 11:19:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.409 11:19:06 -- host/auth.sh@44 -- # keyid=1 00:27:58.409 11:19:06 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:58.409 11:19:06 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:58.409 11:19:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:58.409 11:19:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:58.409 11:19:06 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:27:58.409 11:19:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.409 11:19:06 -- host/auth.sh@68 -- # digest=sha384 00:27:58.409 11:19:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:58.409 11:19:06 -- host/auth.sh@68 -- # keyid=1 00:27:58.409 11:19:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.409 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.409 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.409 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.409 11:19:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.409 11:19:06 -- nvmf/common.sh@717 -- # local ip 00:27:58.409 11:19:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.409 11:19:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.409 11:19:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.409 11:19:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.409 11:19:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.409 11:19:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.409 11:19:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.409 11:19:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.409 11:19:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.409 11:19:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:58.409 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.409 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.409 nvme0n1 00:27:58.409 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.409 11:19:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.409 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.409 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.409 11:19:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.409 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.667 11:19:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.667 11:19:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.667 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.667 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.667 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.667 11:19:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.667 11:19:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:58.667 11:19:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.667 11:19:06 -- host/auth.sh@44 -- # digest=sha384 00:27:58.667 11:19:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.667 11:19:06 -- host/auth.sh@44 -- # keyid=2 00:27:58.667 11:19:06 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:58.667 11:19:06 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:58.668 11:19:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:58.668 11:19:06 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:58.668 11:19:06 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:27:58.668 11:19:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.668 11:19:06 -- host/auth.sh@68 -- # digest=sha384 00:27:58.668 11:19:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:58.668 11:19:06 -- host/auth.sh@68 -- # keyid=2 00:27:58.668 11:19:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.668 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.668 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.668 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.668 11:19:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.668 11:19:06 -- nvmf/common.sh@717 -- # local ip 00:27:58.668 11:19:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.668 11:19:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.668 11:19:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.668 11:19:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.668 11:19:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.668 11:19:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.668 11:19:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.668 11:19:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.668 11:19:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.668 11:19:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:58.668 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.668 
11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.668 nvme0n1 00:27:58.668 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.668 11:19:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.668 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.668 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.668 11:19:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.668 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.668 11:19:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.668 11:19:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.668 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.668 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.926 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.926 11:19:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.926 11:19:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:58.926 11:19:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.926 11:19:06 -- host/auth.sh@44 -- # digest=sha384 00:27:58.926 11:19:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.926 11:19:06 -- host/auth.sh@44 -- # keyid=3 00:27:58.926 11:19:06 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:58.926 11:19:06 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:58.926 11:19:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:58.926 11:19:06 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:58.926 11:19:06 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:27:58.926 11:19:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.926 11:19:06 -- host/auth.sh@68 -- # digest=sha384 00:27:58.926 11:19:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:58.926 11:19:06 -- host/auth.sh@68 -- # keyid=3 00:27:58.926 11:19:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.926 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.926 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.926 11:19:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.926 11:19:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.926 11:19:06 -- nvmf/common.sh@717 -- # local ip 00:27:58.926 11:19:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.926 11:19:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.926 11:19:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.926 11:19:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.926 11:19:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.926 11:19:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.926 11:19:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.926 11:19:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.926 11:19:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.926 11:19:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:58.926 11:19:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.926 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.926 nvme0n1 00:27:58.926 11:19:07 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.926 11:19:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.926 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.926 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:58.926 11:19:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.926 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.926 11:19:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.926 11:19:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.926 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.926 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:58.926 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.926 11:19:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.926 11:19:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:58.926 11:19:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.926 11:19:07 -- host/auth.sh@44 -- # digest=sha384 00:27:58.926 11:19:07 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.926 11:19:07 -- host/auth.sh@44 -- # keyid=4 00:27:58.926 11:19:07 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:58.926 11:19:07 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:58.926 11:19:07 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:58.926 11:19:07 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:27:58.926 11:19:07 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:27:58.926 11:19:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.926 11:19:07 -- host/auth.sh@68 -- # digest=sha384 00:27:58.926 11:19:07 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:58.926 11:19:07 -- host/auth.sh@68 -- # keyid=4 00:27:58.926 11:19:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.926 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.926 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:58.926 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.926 11:19:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.926 11:19:07 -- nvmf/common.sh@717 -- # local ip 00:27:58.926 11:19:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.926 11:19:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.926 11:19:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.926 11:19:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.926 11:19:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.926 11:19:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.926 11:19:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.926 11:19:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.926 11:19:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.926 11:19:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.926 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.926 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.185 nvme0n1 00:27:59.185 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.185 11:19:07 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.185 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.185 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.185 11:19:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.185 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.185 11:19:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.185 11:19:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.185 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.185 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.185 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.185 11:19:07 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.185 11:19:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.185 11:19:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:59.185 11:19:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.185 11:19:07 -- host/auth.sh@44 -- # digest=sha384 00:27:59.185 11:19:07 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.185 11:19:07 -- host/auth.sh@44 -- # keyid=0 00:27:59.185 11:19:07 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:59.185 11:19:07 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:59.185 11:19:07 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:59.185 11:19:07 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:27:59.185 11:19:07 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:27:59.185 11:19:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.185 11:19:07 -- host/auth.sh@68 -- # digest=sha384 00:27:59.185 11:19:07 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:59.185 11:19:07 -- host/auth.sh@68 -- # keyid=0 00:27:59.185 11:19:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.185 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.185 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.185 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.185 11:19:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.185 11:19:07 -- nvmf/common.sh@717 -- # local ip 00:27:59.185 11:19:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.185 11:19:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.185 11:19:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.185 11:19:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.185 11:19:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.185 11:19:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.185 11:19:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.185 11:19:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.185 11:19:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.185 11:19:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:59.185 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.185 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.442 nvme0n1 00:27:59.442 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.442 11:19:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.442 11:19:07 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:59.442 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.442 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.442 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.442 11:19:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.442 11:19:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.442 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.442 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.442 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.442 11:19:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.442 11:19:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:59.442 11:19:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.442 11:19:07 -- host/auth.sh@44 -- # digest=sha384 00:27:59.442 11:19:07 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.442 11:19:07 -- host/auth.sh@44 -- # keyid=1 00:27:59.442 11:19:07 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:59.442 11:19:07 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:59.442 11:19:07 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:59.442 11:19:07 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:27:59.442 11:19:07 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:27:59.442 11:19:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.442 11:19:07 -- host/auth.sh@68 -- # digest=sha384 00:27:59.442 11:19:07 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:59.442 11:19:07 -- host/auth.sh@68 -- # keyid=1 00:27:59.442 11:19:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.442 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.442 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.442 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.442 11:19:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.442 11:19:07 -- nvmf/common.sh@717 -- # local ip 00:27:59.442 11:19:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.442 11:19:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.442 11:19:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.442 11:19:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.442 11:19:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.442 11:19:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.442 11:19:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.442 11:19:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.442 11:19:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.442 11:19:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:59.442 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.442 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.700 nvme0n1 00:27:59.700 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.700 11:19:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.700 11:19:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.700 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 
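
Each connect_authenticate pass traced above reduces to four initiator-side RPC calls. A minimal stand-alone sketch of one such pass (sha384 digest, ffdhe4096 DH group, key id 1, matching the attach just above) follows; it assumes rpc_cmd (the test wrapper around scripts/rpc.py) is available, that the target at 10.0.0.1:4420 still exposes nqn.2024-02.io.spdk:cnode0 with DH-HMAC-CHAP enabled, and that a key named key1 was registered earlier in the script, outside this excerpt.

  # Restrict the initiator to the digest/dhgroup pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # Attach with DH-HMAC-CHAP using the pre-registered key.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
  # Authentication succeeded only if the controller actually shows up.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # Detach so the next digest/dhgroup/key combination starts from a clean slate.
  rpc_cmd bdev_nvme_detach_controller nvme0
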
00:27:59.700 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.700 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.700 11:19:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.700 11:19:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.700 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.700 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.700 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.700 11:19:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.700 11:19:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:59.700 11:19:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.700 11:19:07 -- host/auth.sh@44 -- # digest=sha384 00:27:59.700 11:19:07 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.700 11:19:07 -- host/auth.sh@44 -- # keyid=2 00:27:59.700 11:19:07 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:59.700 11:19:07 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:59.700 11:19:07 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:59.700 11:19:07 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:27:59.700 11:19:07 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:27:59.700 11:19:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.700 11:19:07 -- host/auth.sh@68 -- # digest=sha384 00:27:59.700 11:19:07 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:59.700 11:19:07 -- host/auth.sh@68 -- # keyid=2 00:27:59.700 11:19:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.700 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.700 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.700 11:19:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.700 11:19:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.700 11:19:07 -- nvmf/common.sh@717 -- # local ip 00:27:59.700 11:19:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.700 11:19:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.700 11:19:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.700 11:19:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.700 11:19:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.700 11:19:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.700 11:19:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.700 11:19:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.700 11:19:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.700 11:19:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:59.700 11:19:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.700 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:27:59.957 nvme0n1 00:27:59.957 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.957 11:19:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.957 11:19:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.957 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.957 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:27:59.957 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.957 11:19:08 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.957 11:19:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.957 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.957 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:27:59.958 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.958 11:19:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.958 11:19:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:59.958 11:19:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.958 11:19:08 -- host/auth.sh@44 -- # digest=sha384 00:27:59.958 11:19:08 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.958 11:19:08 -- host/auth.sh@44 -- # keyid=3 00:27:59.958 11:19:08 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:59.958 11:19:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:27:59.958 11:19:08 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:59.958 11:19:08 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:27:59.958 11:19:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:27:59.958 11:19:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.958 11:19:08 -- host/auth.sh@68 -- # digest=sha384 00:27:59.958 11:19:08 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:59.958 11:19:08 -- host/auth.sh@68 -- # keyid=3 00:27:59.958 11:19:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.958 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.958 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:27:59.958 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.958 11:19:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.958 11:19:08 -- nvmf/common.sh@717 -- # local ip 00:27:59.958 11:19:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.958 11:19:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.958 11:19:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.958 11:19:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.958 11:19:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.958 11:19:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.958 11:19:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.958 11:19:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.958 11:19:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.958 11:19:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:59.958 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.958 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:28:00.216 nvme0n1 00:28:00.216 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.216 11:19:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.216 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.216 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:28:00.216 11:19:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:00.216 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.216 11:19:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.216 11:19:08 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:00.216 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.216 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:28:00.474 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.474 11:19:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:00.474 11:19:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:00.474 11:19:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:00.474 11:19:08 -- host/auth.sh@44 -- # digest=sha384 00:28:00.474 11:19:08 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.474 11:19:08 -- host/auth.sh@44 -- # keyid=4 00:28:00.474 11:19:08 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:00.474 11:19:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:00.474 11:19:08 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:00.474 11:19:08 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:00.474 11:19:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:28:00.474 11:19:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:00.474 11:19:08 -- host/auth.sh@68 -- # digest=sha384 00:28:00.474 11:19:08 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:00.474 11:19:08 -- host/auth.sh@68 -- # keyid=4 00:28:00.474 11:19:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.474 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.474 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:28:00.474 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.474 11:19:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:00.474 11:19:08 -- nvmf/common.sh@717 -- # local ip 00:28:00.474 11:19:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:00.474 11:19:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:00.474 11:19:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.474 11:19:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.474 11:19:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:00.474 11:19:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.474 11:19:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:00.474 11:19:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:00.474 11:19:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:00.474 11:19:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.474 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.474 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:28:00.474 nvme0n1 00:28:00.474 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.474 11:19:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.474 11:19:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:00.474 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.474 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:28:00.474 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.735 11:19:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.735 11:19:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.735 11:19:08 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.735 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:28:00.735 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.735 11:19:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.735 11:19:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:00.735 11:19:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:00.735 11:19:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:00.735 11:19:08 -- host/auth.sh@44 -- # digest=sha384 00:28:00.735 11:19:08 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.735 11:19:08 -- host/auth.sh@44 -- # keyid=0 00:28:00.735 11:19:08 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:00.735 11:19:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:00.735 11:19:08 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:00.735 11:19:08 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:00.735 11:19:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:28:00.735 11:19:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:00.735 11:19:08 -- host/auth.sh@68 -- # digest=sha384 00:28:00.735 11:19:08 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:00.735 11:19:08 -- host/auth.sh@68 -- # keyid=0 00:28:00.735 11:19:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:00.735 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.735 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:28:00.735 11:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.735 11:19:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:00.735 11:19:08 -- nvmf/common.sh@717 -- # local ip 00:28:00.735 11:19:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:00.735 11:19:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:00.735 11:19:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.735 11:19:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.735 11:19:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:00.735 11:19:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.735 11:19:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:00.735 11:19:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:00.735 11:19:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:00.735 11:19:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:00.735 11:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.735 11:19:08 -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 nvme0n1 00:28:01.005 11:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.005 11:19:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.005 11:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.005 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 11:19:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:01.005 11:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.005 11:19:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.005 11:19:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.005 11:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.005 11:19:09 -- 
common/autotest_common.sh@10 -- # set +x 00:28:01.005 11:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.005 11:19:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.005 11:19:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:01.005 11:19:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.005 11:19:09 -- host/auth.sh@44 -- # digest=sha384 00:28:01.005 11:19:09 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.005 11:19:09 -- host/auth.sh@44 -- # keyid=1 00:28:01.005 11:19:09 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:01.005 11:19:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:01.005 11:19:09 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:01.005 11:19:09 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:01.005 11:19:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:28:01.005 11:19:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.005 11:19:09 -- host/auth.sh@68 -- # digest=sha384 00:28:01.005 11:19:09 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:01.005 11:19:09 -- host/auth.sh@68 -- # keyid=1 00:28:01.005 11:19:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.005 11:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.005 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:28:01.005 11:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.005 11:19:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.005 11:19:09 -- nvmf/common.sh@717 -- # local ip 00:28:01.005 11:19:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.005 11:19:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.005 11:19:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.005 11:19:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.005 11:19:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.005 11:19:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.005 11:19:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.005 11:19:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.005 11:19:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.005 11:19:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:01.005 11:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.005 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:28:01.577 nvme0n1 00:28:01.577 11:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.577 11:19:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:01.577 11:19:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.577 11:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.577 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:28:01.577 11:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.577 11:19:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.577 11:19:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.577 11:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.577 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:28:01.577 11:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
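
The repeated "local ip" / ip_candidates block in the trace is get_main_ns_ip from nvmf/common.sh working out which address the initiator should dial; on this TCP job it always resolves to 10.0.0.1. A rough reconstruction from the expanded xtrace is below. The transport value is already expanded to tcp in the log, so the TEST_TRANSPORT name used here is an assumption, as is the exact failure handling.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA jobs use the first target IP
          ["tcp"]=NVMF_INITIATOR_IP       # TCP jobs (this run) use the initiator IP
      )
      # Bail out if the transport or its candidate variable name is unknown.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1          # the named variable must be set
      echo "${!ip}"                        # prints 10.0.0.1 throughout this log
  }
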
00:28:01.577 11:19:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.577 11:19:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:01.577 11:19:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.577 11:19:09 -- host/auth.sh@44 -- # digest=sha384 00:28:01.577 11:19:09 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.577 11:19:09 -- host/auth.sh@44 -- # keyid=2 00:28:01.577 11:19:09 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:01.577 11:19:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:01.577 11:19:09 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:01.577 11:19:09 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:01.577 11:19:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:28:01.577 11:19:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.577 11:19:09 -- host/auth.sh@68 -- # digest=sha384 00:28:01.577 11:19:09 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:01.577 11:19:09 -- host/auth.sh@68 -- # keyid=2 00:28:01.577 11:19:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.577 11:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.577 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:28:01.577 11:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.577 11:19:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.577 11:19:09 -- nvmf/common.sh@717 -- # local ip 00:28:01.577 11:19:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.577 11:19:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.577 11:19:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.577 11:19:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.577 11:19:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.577 11:19:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.577 11:19:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.577 11:19:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.577 11:19:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.577 11:19:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.577 11:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.577 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:28:01.836 nvme0n1 00:28:01.836 11:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.836 11:19:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.836 11:19:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:01.836 11:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.836 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:28:01.836 11:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.836 11:19:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.836 11:19:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.836 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.836 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:01.836 11:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.836 11:19:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.836 11:19:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
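
The "nvmet_auth_set_key sha384 ffdhe6144 3" call that closes the line above expands, in the lines that follow, into three echoes: the HMAC digest wrapped as hmac(sha384), the DH group name, and the DHHC-1 secret for the requested key id. Where those echoes are redirected is not visible in the xtrace, so the sketch below keeps the destinations as placeholder variables rather than guessing at real paths.

  # Shape of the target-side helper as implied by the trace; the $TARGET_*_ATTR names are
  # placeholders for whatever configuration attributes the real script writes to.
  nvmet_auth_set_key() {
      local digest dhgroup keyid key
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[$keyid]}                              # e.g. DHHC-1:02:YjQ1MGZi...: for key id 3
      echo "hmac($digest)" > "$TARGET_DHCHAP_HASH_ATTR"     # placeholder destination
      echo "$dhgroup"      > "$TARGET_DHCHAP_DHGROUP_ATTR"  # placeholder destination
      echo "$key"          > "$TARGET_DHCHAP_KEY_ATTR"      # placeholder destination
  }
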
00:28:01.836 11:19:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.836 11:19:10 -- host/auth.sh@44 -- # digest=sha384 00:28:01.836 11:19:10 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.836 11:19:10 -- host/auth.sh@44 -- # keyid=3 00:28:01.836 11:19:10 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:01.836 11:19:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:01.836 11:19:10 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:01.836 11:19:10 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:01.836 11:19:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:28:01.836 11:19:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.836 11:19:10 -- host/auth.sh@68 -- # digest=sha384 00:28:01.836 11:19:10 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:01.836 11:19:10 -- host/auth.sh@68 -- # keyid=3 00:28:01.836 11:19:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.836 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.836 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:01.836 11:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.836 11:19:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.836 11:19:10 -- nvmf/common.sh@717 -- # local ip 00:28:01.836 11:19:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.836 11:19:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.836 11:19:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.836 11:19:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.836 11:19:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.836 11:19:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.836 11:19:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.836 11:19:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.836 11:19:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.836 11:19:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:01.836 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.836 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:02.450 nvme0n1 00:28:02.450 11:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.450 11:19:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.450 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.450 11:19:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.450 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:02.450 11:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.450 11:19:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.450 11:19:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.450 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.450 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:02.450 11:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.450 11:19:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.450 11:19:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:02.450 11:19:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.450 11:19:10 -- host/auth.sh@44 -- 
# digest=sha384 00:28:02.450 11:19:10 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.450 11:19:10 -- host/auth.sh@44 -- # keyid=4 00:28:02.450 11:19:10 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:02.450 11:19:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:02.450 11:19:10 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:02.450 11:19:10 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:02.450 11:19:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:28:02.450 11:19:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.450 11:19:10 -- host/auth.sh@68 -- # digest=sha384 00:28:02.450 11:19:10 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:02.450 11:19:10 -- host/auth.sh@68 -- # keyid=4 00:28:02.450 11:19:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.450 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.450 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:02.450 11:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.450 11:19:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.450 11:19:10 -- nvmf/common.sh@717 -- # local ip 00:28:02.450 11:19:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.450 11:19:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.450 11:19:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.450 11:19:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.450 11:19:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.450 11:19:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.450 11:19:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.450 11:19:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.450 11:19:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.450 11:19:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.450 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.450 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:02.731 nvme0n1 00:28:02.731 11:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.731 11:19:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.731 11:19:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.731 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.731 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:02.731 11:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.731 11:19:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.731 11:19:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.731 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.731 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:02.731 11:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.731 11:19:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.731 11:19:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.731 11:19:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:02.731 11:19:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.731 11:19:10 -- host/auth.sh@44 -- # 
digest=sha384 00:28:02.731 11:19:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.731 11:19:10 -- host/auth.sh@44 -- # keyid=0 00:28:02.731 11:19:10 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:02.731 11:19:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:02.731 11:19:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:02.731 11:19:10 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:02.731 11:19:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:28:02.731 11:19:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.731 11:19:10 -- host/auth.sh@68 -- # digest=sha384 00:28:02.731 11:19:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:02.731 11:19:10 -- host/auth.sh@68 -- # keyid=0 00:28:02.731 11:19:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:02.731 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.731 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:02.731 11:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.731 11:19:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.731 11:19:10 -- nvmf/common.sh@717 -- # local ip 00:28:02.731 11:19:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.731 11:19:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.731 11:19:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.731 11:19:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.731 11:19:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.731 11:19:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.731 11:19:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.732 11:19:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.732 11:19:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.732 11:19:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:02.732 11:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.732 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:28:03.673 nvme0n1 00:28:03.673 11:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.673 11:19:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.673 11:19:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.673 11:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.673 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:28:03.673 11:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.673 11:19:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.673 11:19:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.673 11:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.673 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:28:03.673 11:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.673 11:19:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.673 11:19:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:03.673 11:19:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.673 11:19:11 -- host/auth.sh@44 -- # digest=sha384 00:28:03.673 11:19:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.673 11:19:11 -- host/auth.sh@44 -- # keyid=1 00:28:03.673 11:19:11 -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:03.673 11:19:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:03.673 11:19:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:03.673 11:19:11 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:03.673 11:19:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:28:03.673 11:19:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:03.673 11:19:11 -- host/auth.sh@68 -- # digest=sha384 00:28:03.673 11:19:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:03.673 11:19:11 -- host/auth.sh@68 -- # keyid=1 00:28:03.673 11:19:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:03.673 11:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.673 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:28:03.673 11:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.673 11:19:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.673 11:19:11 -- nvmf/common.sh@717 -- # local ip 00:28:03.673 11:19:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.673 11:19:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.673 11:19:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.673 11:19:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.673 11:19:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.673 11:19:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.673 11:19:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.673 11:19:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.673 11:19:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.673 11:19:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:03.673 11:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.673 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:28:04.239 nvme0n1 00:28:04.239 11:19:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.239 11:19:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.239 11:19:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.239 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:04.239 11:19:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:04.239 11:19:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.239 11:19:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.239 11:19:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.239 11:19:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.239 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:04.239 11:19:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.239 11:19:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:04.239 11:19:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:04.239 11:19:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:04.239 11:19:12 -- host/auth.sh@44 -- # digest=sha384 00:28:04.239 11:19:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.239 11:19:12 -- host/auth.sh@44 -- # keyid=2 00:28:04.239 11:19:12 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:04.239 11:19:12 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:04.239 11:19:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:04.239 11:19:12 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:04.239 11:19:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:28:04.239 11:19:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:04.239 11:19:12 -- host/auth.sh@68 -- # digest=sha384 00:28:04.239 11:19:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:04.239 11:19:12 -- host/auth.sh@68 -- # keyid=2 00:28:04.239 11:19:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.239 11:19:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.239 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:04.239 11:19:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.239 11:19:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:04.239 11:19:12 -- nvmf/common.sh@717 -- # local ip 00:28:04.239 11:19:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:04.239 11:19:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:04.239 11:19:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.239 11:19:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.239 11:19:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:04.239 11:19:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.239 11:19:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:04.239 11:19:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:04.239 11:19:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:04.239 11:19:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:04.239 11:19:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.239 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:04.808 nvme0n1 00:28:04.808 11:19:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.808 11:19:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.808 11:19:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.808 11:19:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:04.808 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:04.808 11:19:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.808 11:19:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.808 11:19:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.808 11:19:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.808 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:04.808 11:19:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.808 11:19:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:04.808 11:19:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:04.808 11:19:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:04.808 11:19:12 -- host/auth.sh@44 -- # digest=sha384 00:28:04.808 11:19:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.808 11:19:12 -- host/auth.sh@44 -- # keyid=3 00:28:04.808 11:19:12 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:04.808 11:19:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:04.808 11:19:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:04.808 11:19:12 -- host/auth.sh@49 
-- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:04.808 11:19:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:28:04.808 11:19:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:04.808 11:19:12 -- host/auth.sh@68 -- # digest=sha384 00:28:04.808 11:19:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:04.808 11:19:12 -- host/auth.sh@68 -- # keyid=3 00:28:04.808 11:19:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.808 11:19:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.808 11:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:04.808 11:19:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.808 11:19:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:04.808 11:19:13 -- nvmf/common.sh@717 -- # local ip 00:28:04.808 11:19:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:04.808 11:19:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:04.808 11:19:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.808 11:19:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.808 11:19:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:04.808 11:19:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.808 11:19:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:04.808 11:19:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:04.808 11:19:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:04.808 11:19:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:04.808 11:19:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.808 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:28:05.744 nvme0n1 00:28:05.744 11:19:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.744 11:19:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.744 11:19:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:05.744 11:19:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.744 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:28:05.744 11:19:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.744 11:19:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.744 11:19:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.744 11:19:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.744 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:28:05.744 11:19:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.744 11:19:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:05.744 11:19:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:05.744 11:19:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:05.744 11:19:13 -- host/auth.sh@44 -- # digest=sha384 00:28:05.744 11:19:13 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.744 11:19:13 -- host/auth.sh@44 -- # keyid=4 00:28:05.744 11:19:13 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:05.744 11:19:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:05.744 11:19:13 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:05.744 11:19:13 -- host/auth.sh@49 -- # echo 
DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:05.744 11:19:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:28:05.744 11:19:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:05.744 11:19:13 -- host/auth.sh@68 -- # digest=sha384 00:28:05.744 11:19:13 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:05.744 11:19:13 -- host/auth.sh@68 -- # keyid=4 00:28:05.744 11:19:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:05.744 11:19:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.744 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:28:05.744 11:19:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.744 11:19:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:05.744 11:19:13 -- nvmf/common.sh@717 -- # local ip 00:28:05.744 11:19:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:05.744 11:19:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:05.744 11:19:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.744 11:19:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.744 11:19:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:05.744 11:19:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.744 11:19:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:05.744 11:19:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:05.744 11:19:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:05.744 11:19:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.744 11:19:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.744 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:28:06.325 nvme0n1 00:28:06.325 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.325 11:19:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.325 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.325 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.325 11:19:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.325 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.325 11:19:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.325 11:19:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.325 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.325 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.325 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.325 11:19:14 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:28:06.325 11:19:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.325 11:19:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.325 11:19:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:06.325 11:19:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.325 11:19:14 -- host/auth.sh@44 -- # digest=sha512 00:28:06.325 11:19:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.325 11:19:14 -- host/auth.sh@44 -- # keyid=0 00:28:06.325 11:19:14 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:06.325 11:19:14 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:06.325 11:19:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:06.325 
11:19:14 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:06.325 11:19:14 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:28:06.325 11:19:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.325 11:19:14 -- host/auth.sh@68 -- # digest=sha512 00:28:06.325 11:19:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:06.325 11:19:14 -- host/auth.sh@68 -- # keyid=0 00:28:06.325 11:19:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.325 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.325 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.325 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.325 11:19:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.325 11:19:14 -- nvmf/common.sh@717 -- # local ip 00:28:06.325 11:19:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.325 11:19:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.325 11:19:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.325 11:19:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.325 11:19:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.325 11:19:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.325 11:19:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.325 11:19:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.325 11:19:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.325 11:19:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:06.325 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.325 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.325 nvme0n1 00:28:06.325 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.325 11:19:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.325 11:19:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.325 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.325 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.325 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.583 11:19:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.583 11:19:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.583 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.583 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.583 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.583 11:19:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.583 11:19:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:06.583 11:19:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.583 11:19:14 -- host/auth.sh@44 -- # digest=sha512 00:28:06.583 11:19:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.583 11:19:14 -- host/auth.sh@44 -- # keyid=1 00:28:06.583 11:19:14 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:06.583 11:19:14 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:06.583 11:19:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:06.583 11:19:14 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:06.583 11:19:14 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:28:06.583 11:19:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.583 11:19:14 -- host/auth.sh@68 -- # digest=sha512 00:28:06.583 11:19:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:06.583 11:19:14 -- host/auth.sh@68 -- # keyid=1 00:28:06.583 11:19:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.583 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.583 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.583 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.583 11:19:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.583 11:19:14 -- nvmf/common.sh@717 -- # local ip 00:28:06.583 11:19:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.583 11:19:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.583 11:19:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.584 11:19:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.584 11:19:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.584 11:19:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.584 11:19:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.584 11:19:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.584 11:19:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.584 11:19:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:06.584 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.584 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.584 nvme0n1 00:28:06.584 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.584 11:19:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.584 11:19:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.584 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.584 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.584 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.584 11:19:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.584 11:19:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.584 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.584 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.584 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.584 11:19:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.584 11:19:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:06.584 11:19:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.584 11:19:14 -- host/auth.sh@44 -- # digest=sha512 00:28:06.584 11:19:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.584 11:19:14 -- host/auth.sh@44 -- # keyid=2 00:28:06.584 11:19:14 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:06.584 11:19:14 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:06.584 11:19:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:06.584 11:19:14 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:06.584 11:19:14 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:28:06.584 11:19:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.584 11:19:14 -- 
host/auth.sh@68 -- # digest=sha512 00:28:06.584 11:19:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:06.584 11:19:14 -- host/auth.sh@68 -- # keyid=2 00:28:06.584 11:19:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.584 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.584 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.584 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.584 11:19:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.584 11:19:14 -- nvmf/common.sh@717 -- # local ip 00:28:06.584 11:19:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.584 11:19:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.584 11:19:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.584 11:19:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.584 11:19:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.584 11:19:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.584 11:19:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.584 11:19:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.584 11:19:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.584 11:19:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:06.584 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.584 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.842 nvme0n1 00:28:06.842 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.842 11:19:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.842 11:19:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.842 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.842 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.842 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.842 11:19:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.842 11:19:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.842 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.842 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.842 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.842 11:19:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.842 11:19:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:06.842 11:19:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.842 11:19:14 -- host/auth.sh@44 -- # digest=sha512 00:28:06.842 11:19:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.842 11:19:14 -- host/auth.sh@44 -- # keyid=3 00:28:06.842 11:19:14 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:06.842 11:19:14 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:06.842 11:19:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:06.842 11:19:14 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:06.842 11:19:14 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:28:06.842 11:19:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.842 11:19:14 -- host/auth.sh@68 -- # digest=sha512 00:28:06.842 11:19:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:06.842 11:19:14 
-- host/auth.sh@68 -- # keyid=3 00:28:06.842 11:19:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.842 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.842 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.842 11:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.842 11:19:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.842 11:19:14 -- nvmf/common.sh@717 -- # local ip 00:28:06.842 11:19:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.842 11:19:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.842 11:19:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.842 11:19:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.842 11:19:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.842 11:19:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.842 11:19:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.842 11:19:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.842 11:19:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.842 11:19:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:06.842 11:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.842 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 nvme0n1 00:28:07.118 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.118 11:19:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.118 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.118 11:19:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.118 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.118 11:19:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.118 11:19:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.118 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.118 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.118 11:19:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.118 11:19:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:07.118 11:19:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.118 11:19:15 -- host/auth.sh@44 -- # digest=sha512 00:28:07.118 11:19:15 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.118 11:19:15 -- host/auth.sh@44 -- # keyid=4 00:28:07.118 11:19:15 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:07.118 11:19:15 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:07.118 11:19:15 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:07.118 11:19:15 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:07.118 11:19:15 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:28:07.118 11:19:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.118 11:19:15 -- host/auth.sh@68 -- # digest=sha512 00:28:07.118 11:19:15 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:07.118 11:19:15 -- host/auth.sh@68 -- # keyid=4 00:28:07.118 11:19:15 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:07.118 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.118 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.118 11:19:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.118 11:19:15 -- nvmf/common.sh@717 -- # local ip 00:28:07.118 11:19:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.118 11:19:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.118 11:19:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.118 11:19:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.118 11:19:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.118 11:19:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.118 11:19:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.118 11:19:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.118 11:19:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.118 11:19:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.118 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.118 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 nvme0n1 00:28:07.118 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.118 11:19:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.118 11:19:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.118 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.118 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.118 11:19:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.118 11:19:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.118 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.118 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.118 11:19:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.118 11:19:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.118 11:19:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:07.118 11:19:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.118 11:19:15 -- host/auth.sh@44 -- # digest=sha512 00:28:07.118 11:19:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.118 11:19:15 -- host/auth.sh@44 -- # keyid=0 00:28:07.118 11:19:15 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:07.118 11:19:15 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:07.119 11:19:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:07.119 11:19:15 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:07.119 11:19:15 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:28:07.119 11:19:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.119 11:19:15 -- host/auth.sh@68 -- # digest=sha512 00:28:07.119 11:19:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:07.119 11:19:15 -- host/auth.sh@68 -- # keyid=0 00:28:07.119 11:19:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
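The trace above repeats one and the same cycle for every DH group / key-slot combination. A minimal sketch of that cycle, reconstructed from the commands visible in the trace (the loop structure is an approximation of host/auth.sh; nvmet_auth_set_key and rpc_cmd are helpers provided by the test suite, not standalone tools):

    # Sketch of the per-(dhgroup, keyid) cycle seen in the trace; NQNs, address
    # and RPC flags are copied from the log, the loop itself is an approximation.
    digest=sha512
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in 0 1 2 3 4; do
            # 1. Install the DHHC-1 secret for this key slot on the nvmet target.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # 2. Restrict the host to the digest/dhgroup under test.
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # 3. Connect with the matching key; the DH-HMAC-CHAP exchange must succeed.
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
            # 4. Verify the controller came up, then detach before the next iteration.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done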
00:28:07.119 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.119 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.399 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.399 11:19:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.399 11:19:15 -- nvmf/common.sh@717 -- # local ip 00:28:07.399 11:19:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.399 11:19:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.399 11:19:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.399 11:19:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.399 11:19:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.399 11:19:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.399 11:19:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.399 11:19:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.399 11:19:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.399 11:19:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:07.399 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.399 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.399 nvme0n1 00:28:07.399 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.399 11:19:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.399 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.399 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.399 11:19:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.399 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.399 11:19:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.399 11:19:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.399 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.399 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.399 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.399 11:19:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.399 11:19:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:07.399 11:19:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.399 11:19:15 -- host/auth.sh@44 -- # digest=sha512 00:28:07.399 11:19:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.399 11:19:15 -- host/auth.sh@44 -- # keyid=1 00:28:07.399 11:19:15 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:07.399 11:19:15 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:07.399 11:19:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:07.399 11:19:15 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:07.399 11:19:15 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:28:07.399 11:19:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.400 11:19:15 -- host/auth.sh@68 -- # digest=sha512 00:28:07.400 11:19:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:07.400 11:19:15 -- host/auth.sh@68 -- # keyid=1 00:28:07.400 11:19:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.400 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.400 11:19:15 -- 
common/autotest_common.sh@10 -- # set +x 00:28:07.400 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.400 11:19:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.400 11:19:15 -- nvmf/common.sh@717 -- # local ip 00:28:07.400 11:19:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.400 11:19:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.400 11:19:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.400 11:19:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.400 11:19:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.400 11:19:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.400 11:19:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.400 11:19:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.400 11:19:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.400 11:19:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:07.400 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.400 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.681 nvme0n1 00:28:07.681 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.681 11:19:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.681 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.681 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.681 11:19:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.681 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.681 11:19:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.681 11:19:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.681 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.681 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.681 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.681 11:19:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.682 11:19:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:07.682 11:19:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.682 11:19:15 -- host/auth.sh@44 -- # digest=sha512 00:28:07.682 11:19:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.682 11:19:15 -- host/auth.sh@44 -- # keyid=2 00:28:07.682 11:19:15 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:07.682 11:19:15 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:07.682 11:19:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:07.682 11:19:15 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:07.682 11:19:15 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:28:07.682 11:19:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.682 11:19:15 -- host/auth.sh@68 -- # digest=sha512 00:28:07.682 11:19:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:07.682 11:19:15 -- host/auth.sh@68 -- # keyid=2 00:28:07.682 11:19:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.682 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.682 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.682 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.682 11:19:15 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:28:07.682 11:19:15 -- nvmf/common.sh@717 -- # local ip 00:28:07.682 11:19:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.682 11:19:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.682 11:19:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.682 11:19:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.682 11:19:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.682 11:19:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.682 11:19:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.682 11:19:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.682 11:19:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.682 11:19:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:07.682 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.682 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.682 nvme0n1 00:28:07.682 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.682 11:19:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.682 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.682 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.958 11:19:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.958 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.958 11:19:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.958 11:19:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.958 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.958 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.958 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.958 11:19:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.958 11:19:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:07.958 11:19:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.958 11:19:15 -- host/auth.sh@44 -- # digest=sha512 00:28:07.958 11:19:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.958 11:19:15 -- host/auth.sh@44 -- # keyid=3 00:28:07.958 11:19:15 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:07.958 11:19:15 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:07.958 11:19:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:07.958 11:19:15 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:07.958 11:19:15 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:28:07.958 11:19:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.958 11:19:15 -- host/auth.sh@68 -- # digest=sha512 00:28:07.958 11:19:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:07.958 11:19:15 -- host/auth.sh@68 -- # keyid=3 00:28:07.958 11:19:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.958 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.958 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.958 11:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.958 11:19:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.958 11:19:15 -- nvmf/common.sh@717 -- # local ip 00:28:07.958 11:19:15 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:28:07.958 11:19:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.958 11:19:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.958 11:19:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.958 11:19:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.958 11:19:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.958 11:19:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.958 11:19:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.958 11:19:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.958 11:19:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:07.958 11:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.958 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:28:07.958 nvme0n1 00:28:07.958 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.958 11:19:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.958 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.958 11:19:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.958 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:07.958 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.958 11:19:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.958 11:19:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.958 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.958 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:07.958 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.958 11:19:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.958 11:19:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:07.958 11:19:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.958 11:19:16 -- host/auth.sh@44 -- # digest=sha512 00:28:07.958 11:19:16 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.958 11:19:16 -- host/auth.sh@44 -- # keyid=4 00:28:07.958 11:19:16 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:07.958 11:19:16 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:07.958 11:19:16 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:07.958 11:19:16 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:07.958 11:19:16 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:28:07.958 11:19:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.958 11:19:16 -- host/auth.sh@68 -- # digest=sha512 00:28:07.958 11:19:16 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:07.958 11:19:16 -- host/auth.sh@68 -- # keyid=4 00:28:07.958 11:19:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.958 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.958 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.225 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.225 11:19:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.225 11:19:16 -- nvmf/common.sh@717 -- # local ip 00:28:08.225 11:19:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.225 11:19:16 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:28:08.225 11:19:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.225 11:19:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.225 11:19:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.225 11:19:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.225 11:19:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.225 11:19:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.225 11:19:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.225 11:19:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.225 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.225 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.225 nvme0n1 00:28:08.225 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.225 11:19:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.225 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.225 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.225 11:19:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.225 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.225 11:19:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.225 11:19:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.225 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.225 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.225 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.225 11:19:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.225 11:19:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.225 11:19:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:08.225 11:19:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.225 11:19:16 -- host/auth.sh@44 -- # digest=sha512 00:28:08.225 11:19:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.225 11:19:16 -- host/auth.sh@44 -- # keyid=0 00:28:08.225 11:19:16 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:08.225 11:19:16 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:08.225 11:19:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:08.225 11:19:16 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:08.225 11:19:16 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:28:08.225 11:19:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.225 11:19:16 -- host/auth.sh@68 -- # digest=sha512 00:28:08.225 11:19:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:08.225 11:19:16 -- host/auth.sh@68 -- # keyid=0 00:28:08.225 11:19:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:08.225 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.225 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.225 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.225 11:19:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.225 11:19:16 -- nvmf/common.sh@717 -- # local ip 00:28:08.225 11:19:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.225 11:19:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.225 11:19:16 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.225 11:19:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.225 11:19:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.225 11:19:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.225 11:19:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.225 11:19:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.225 11:19:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.225 11:19:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:08.225 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.225 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.495 nvme0n1 00:28:08.495 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.495 11:19:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.495 11:19:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.495 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.495 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.495 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.495 11:19:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.495 11:19:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.495 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.495 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.495 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.495 11:19:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.495 11:19:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:08.495 11:19:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.495 11:19:16 -- host/auth.sh@44 -- # digest=sha512 00:28:08.495 11:19:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.495 11:19:16 -- host/auth.sh@44 -- # keyid=1 00:28:08.495 11:19:16 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:08.495 11:19:16 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:08.495 11:19:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:08.495 11:19:16 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:08.495 11:19:16 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:28:08.495 11:19:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.495 11:19:16 -- host/auth.sh@68 -- # digest=sha512 00:28:08.495 11:19:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:08.495 11:19:16 -- host/auth.sh@68 -- # keyid=1 00:28:08.495 11:19:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:08.495 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.495 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.495 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.495 11:19:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.495 11:19:16 -- nvmf/common.sh@717 -- # local ip 00:28:08.495 11:19:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.495 11:19:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.495 11:19:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.495 11:19:16 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.495 11:19:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.495 11:19:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.495 11:19:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.495 11:19:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.495 11:19:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.495 11:19:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:08.495 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.495 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.784 nvme0n1 00:28:08.784 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.784 11:19:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.784 11:19:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.784 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.784 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.784 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.784 11:19:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.784 11:19:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.784 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.784 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.784 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.784 11:19:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.784 11:19:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:08.784 11:19:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.784 11:19:16 -- host/auth.sh@44 -- # digest=sha512 00:28:08.784 11:19:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.784 11:19:16 -- host/auth.sh@44 -- # keyid=2 00:28:08.784 11:19:16 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:08.784 11:19:16 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:08.784 11:19:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:08.784 11:19:16 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:08.784 11:19:16 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:28:08.784 11:19:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.784 11:19:16 -- host/auth.sh@68 -- # digest=sha512 00:28:08.784 11:19:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:08.784 11:19:16 -- host/auth.sh@68 -- # keyid=2 00:28:08.784 11:19:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:08.784 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.784 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.784 11:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.784 11:19:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.784 11:19:16 -- nvmf/common.sh@717 -- # local ip 00:28:08.784 11:19:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.784 11:19:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.784 11:19:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.784 11:19:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.784 11:19:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.784 11:19:16 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:28:08.784 11:19:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.784 11:19:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.784 11:19:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.784 11:19:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:08.784 11:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.784 11:19:16 -- common/autotest_common.sh@10 -- # set +x 00:28:09.055 nvme0n1 00:28:09.055 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.055 11:19:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.055 11:19:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:09.055 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.055 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.055 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.055 11:19:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.055 11:19:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.055 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.055 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.055 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.055 11:19:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:09.055 11:19:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:09.055 11:19:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:09.055 11:19:17 -- host/auth.sh@44 -- # digest=sha512 00:28:09.055 11:19:17 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.055 11:19:17 -- host/auth.sh@44 -- # keyid=3 00:28:09.055 11:19:17 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:09.055 11:19:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:09.055 11:19:17 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:09.055 11:19:17 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:09.055 11:19:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:28:09.055 11:19:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:09.055 11:19:17 -- host/auth.sh@68 -- # digest=sha512 00:28:09.055 11:19:17 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:09.055 11:19:17 -- host/auth.sh@68 -- # keyid=3 00:28:09.055 11:19:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.055 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.055 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.055 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.055 11:19:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:09.055 11:19:17 -- nvmf/common.sh@717 -- # local ip 00:28:09.055 11:19:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:09.055 11:19:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:09.055 11:19:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.055 11:19:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.055 11:19:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:09.055 11:19:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.055 11:19:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:09.055 11:19:17 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:09.055 11:19:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:09.055 11:19:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:09.055 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.055 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.325 nvme0n1 00:28:09.325 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.325 11:19:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:09.325 11:19:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.325 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.325 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.325 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.325 11:19:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.325 11:19:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.325 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.325 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.325 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.325 11:19:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:09.325 11:19:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:09.325 11:19:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:09.325 11:19:17 -- host/auth.sh@44 -- # digest=sha512 00:28:09.325 11:19:17 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.325 11:19:17 -- host/auth.sh@44 -- # keyid=4 00:28:09.325 11:19:17 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:09.325 11:19:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:09.325 11:19:17 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:09.325 11:19:17 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:09.325 11:19:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:28:09.325 11:19:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:09.325 11:19:17 -- host/auth.sh@68 -- # digest=sha512 00:28:09.325 11:19:17 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:09.325 11:19:17 -- host/auth.sh@68 -- # keyid=4 00:28:09.325 11:19:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.325 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.325 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.325 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.325 11:19:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:09.325 11:19:17 -- nvmf/common.sh@717 -- # local ip 00:28:09.325 11:19:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:09.325 11:19:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:09.325 11:19:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.325 11:19:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.325 11:19:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:09.325 11:19:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.325 11:19:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:09.325 11:19:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:09.325 11:19:17 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:09.325 11:19:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.325 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.325 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.602 nvme0n1 00:28:09.602 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.602 11:19:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.602 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.602 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.602 11:19:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:09.602 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.602 11:19:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.602 11:19:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.602 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.602 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.602 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.602 11:19:17 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.602 11:19:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:09.602 11:19:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:09.602 11:19:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:09.602 11:19:17 -- host/auth.sh@44 -- # digest=sha512 00:28:09.602 11:19:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:09.602 11:19:17 -- host/auth.sh@44 -- # keyid=0 00:28:09.602 11:19:17 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:09.602 11:19:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:09.602 11:19:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:09.602 11:19:17 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:09.602 11:19:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:28:09.602 11:19:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:09.602 11:19:17 -- host/auth.sh@68 -- # digest=sha512 00:28:09.602 11:19:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:09.602 11:19:17 -- host/auth.sh@68 -- # keyid=0 00:28:09.602 11:19:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:09.602 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.602 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:09.602 11:19:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.602 11:19:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:09.602 11:19:17 -- nvmf/common.sh@717 -- # local ip 00:28:09.602 11:19:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:09.602 11:19:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:09.602 11:19:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.602 11:19:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.602 11:19:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:09.602 11:19:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.602 11:19:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:09.602 11:19:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:09.602 11:19:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:09.602 11:19:17 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:09.602 11:19:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.602 11:19:17 -- common/autotest_common.sh@10 -- # set +x 00:28:10.177 nvme0n1 00:28:10.177 11:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.177 11:19:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.177 11:19:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.177 11:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.177 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:28:10.177 11:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.177 11:19:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.177 11:19:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.177 11:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.177 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:28:10.177 11:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.177 11:19:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.177 11:19:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:10.177 11:19:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.177 11:19:18 -- host/auth.sh@44 -- # digest=sha512 00:28:10.177 11:19:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.177 11:19:18 -- host/auth.sh@44 -- # keyid=1 00:28:10.177 11:19:18 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:10.177 11:19:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:10.177 11:19:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:10.177 11:19:18 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:10.177 11:19:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:28:10.177 11:19:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.177 11:19:18 -- host/auth.sh@68 -- # digest=sha512 00:28:10.177 11:19:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:10.177 11:19:18 -- host/auth.sh@68 -- # keyid=1 00:28:10.177 11:19:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:10.177 11:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.177 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:28:10.177 11:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.177 11:19:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.177 11:19:18 -- nvmf/common.sh@717 -- # local ip 00:28:10.177 11:19:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.177 11:19:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.177 11:19:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.177 11:19:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.177 11:19:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.177 11:19:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.177 11:19:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.177 11:19:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.177 11:19:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.177 11:19:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:10.177 11:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.177 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:28:10.437 nvme0n1 00:28:10.437 11:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.437 11:19:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.437 11:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.437 11:19:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.437 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:28:10.437 11:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.437 11:19:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.437 11:19:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.437 11:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.437 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:28:10.437 11:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.437 11:19:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.437 11:19:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:10.437 11:19:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.437 11:19:18 -- host/auth.sh@44 -- # digest=sha512 00:28:10.437 11:19:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.437 11:19:18 -- host/auth.sh@44 -- # keyid=2 00:28:10.437 11:19:18 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:10.437 11:19:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:10.437 11:19:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:10.437 11:19:18 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:10.437 11:19:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:28:10.437 11:19:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.437 11:19:18 -- host/auth.sh@68 -- # digest=sha512 00:28:10.437 11:19:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:10.437 11:19:18 -- host/auth.sh@68 -- # keyid=2 00:28:10.437 11:19:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:10.437 11:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.437 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:28:10.437 11:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.437 11:19:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.437 11:19:18 -- nvmf/common.sh@717 -- # local ip 00:28:10.437 11:19:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.437 11:19:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.437 11:19:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.437 11:19:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.437 11:19:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.437 11:19:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.437 11:19:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.437 11:19:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.437 11:19:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.437 11:19:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:10.437 11:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.437 11:19:18 -- 
common/autotest_common.sh@10 -- # set +x 00:28:11.006 nvme0n1 00:28:11.006 11:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.006 11:19:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.006 11:19:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.006 11:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.006 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:28:11.007 11:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.007 11:19:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.007 11:19:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.007 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.007 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.007 11:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.007 11:19:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.007 11:19:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:11.007 11:19:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.007 11:19:19 -- host/auth.sh@44 -- # digest=sha512 00:28:11.007 11:19:19 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.007 11:19:19 -- host/auth.sh@44 -- # keyid=3 00:28:11.007 11:19:19 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:11.007 11:19:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:11.007 11:19:19 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:11.007 11:19:19 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:11.007 11:19:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:28:11.007 11:19:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.007 11:19:19 -- host/auth.sh@68 -- # digest=sha512 00:28:11.007 11:19:19 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:11.007 11:19:19 -- host/auth.sh@68 -- # keyid=3 00:28:11.007 11:19:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:11.007 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.007 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.007 11:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.007 11:19:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.007 11:19:19 -- nvmf/common.sh@717 -- # local ip 00:28:11.007 11:19:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.007 11:19:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.007 11:19:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.007 11:19:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.007 11:19:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.007 11:19:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.007 11:19:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.007 11:19:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.007 11:19:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.007 11:19:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:11.007 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.007 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.265 nvme0n1 00:28:11.265 11:19:19 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:28:11.265 11:19:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.265 11:19:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.265 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.265 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.265 11:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.265 11:19:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.265 11:19:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.265 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.265 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.265 11:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.265 11:19:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.265 11:19:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:11.265 11:19:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.265 11:19:19 -- host/auth.sh@44 -- # digest=sha512 00:28:11.265 11:19:19 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.265 11:19:19 -- host/auth.sh@44 -- # keyid=4 00:28:11.265 11:19:19 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:11.265 11:19:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:11.265 11:19:19 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:11.265 11:19:19 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:11.265 11:19:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:28:11.265 11:19:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.265 11:19:19 -- host/auth.sh@68 -- # digest=sha512 00:28:11.265 11:19:19 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:11.265 11:19:19 -- host/auth.sh@68 -- # keyid=4 00:28:11.265 11:19:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:11.265 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.265 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.265 11:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.543 11:19:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.543 11:19:19 -- nvmf/common.sh@717 -- # local ip 00:28:11.543 11:19:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.543 11:19:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.543 11:19:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.543 11:19:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.543 11:19:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.543 11:19:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.543 11:19:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.543 11:19:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.543 11:19:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.543 11:19:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.543 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.543 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.818 nvme0n1 00:28:11.818 11:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.818 11:19:19 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:11.818 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.818 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.818 11:19:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.818 11:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.818 11:19:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.818 11:19:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.818 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.818 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.818 11:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.818 11:19:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.818 11:19:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.818 11:19:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:11.818 11:19:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.818 11:19:19 -- host/auth.sh@44 -- # digest=sha512 00:28:11.818 11:19:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.818 11:19:19 -- host/auth.sh@44 -- # keyid=0 00:28:11.818 11:19:19 -- host/auth.sh@45 -- # key=DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:11.818 11:19:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:11.818 11:19:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:11.818 11:19:19 -- host/auth.sh@49 -- # echo DHHC-1:00:MDQ3MDVkYjk3Njg1NzczNTdhOWU0MzliMDQxYjE2YzTeby4G: 00:28:11.818 11:19:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:28:11.818 11:19:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.818 11:19:19 -- host/auth.sh@68 -- # digest=sha512 00:28:11.818 11:19:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:11.818 11:19:19 -- host/auth.sh@68 -- # keyid=0 00:28:11.818 11:19:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:11.818 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.818 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:11.818 11:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.818 11:19:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.818 11:19:19 -- nvmf/common.sh@717 -- # local ip 00:28:11.818 11:19:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.818 11:19:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.818 11:19:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.818 11:19:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.818 11:19:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.818 11:19:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.818 11:19:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.818 11:19:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.818 11:19:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.818 11:19:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:11.818 11:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.818 11:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:12.400 nvme0n1 00:28:12.400 11:19:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.400 11:19:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.400 11:19:20 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:28:12.400 11:19:20 -- common/autotest_common.sh@10 -- # set +x 00:28:12.400 11:19:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.400 11:19:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.400 11:19:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.400 11:19:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.400 11:19:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.400 11:19:20 -- common/autotest_common.sh@10 -- # set +x 00:28:12.400 11:19:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.400 11:19:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.400 11:19:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:12.400 11:19:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.400 11:19:20 -- host/auth.sh@44 -- # digest=sha512 00:28:12.400 11:19:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.400 11:19:20 -- host/auth.sh@44 -- # keyid=1 00:28:12.400 11:19:20 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:12.400 11:19:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:12.400 11:19:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:12.400 11:19:20 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:12.400 11:19:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:28:12.401 11:19:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.401 11:19:20 -- host/auth.sh@68 -- # digest=sha512 00:28:12.401 11:19:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:12.401 11:19:20 -- host/auth.sh@68 -- # keyid=1 00:28:12.401 11:19:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:12.401 11:19:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.401 11:19:20 -- common/autotest_common.sh@10 -- # set +x 00:28:12.401 11:19:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.401 11:19:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.401 11:19:20 -- nvmf/common.sh@717 -- # local ip 00:28:12.401 11:19:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.401 11:19:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.401 11:19:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.401 11:19:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.401 11:19:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.401 11:19:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.401 11:19:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.401 11:19:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.401 11:19:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.401 11:19:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:12.401 11:19:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.401 11:19:20 -- common/autotest_common.sh@10 -- # set +x 00:28:13.334 nvme0n1 00:28:13.334 11:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.334 11:19:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.334 11:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.334 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.334 11:19:21 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.334 11:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.334 11:19:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.334 11:19:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.334 11:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.334 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.334 11:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.334 11:19:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.334 11:19:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:13.334 11:19:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.334 11:19:21 -- host/auth.sh@44 -- # digest=sha512 00:28:13.334 11:19:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.334 11:19:21 -- host/auth.sh@44 -- # keyid=2 00:28:13.334 11:19:21 -- host/auth.sh@45 -- # key=DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:13.334 11:19:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:13.334 11:19:21 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:13.334 11:19:21 -- host/auth.sh@49 -- # echo DHHC-1:01:NmExOGQ0YTQwYjJkZWZlYjRmZWI3NmY0ODJmYmE3ZjE2ycC4: 00:28:13.334 11:19:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:28:13.334 11:19:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.334 11:19:21 -- host/auth.sh@68 -- # digest=sha512 00:28:13.334 11:19:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:13.334 11:19:21 -- host/auth.sh@68 -- # keyid=2 00:28:13.334 11:19:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:13.334 11:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.335 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.335 11:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.335 11:19:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.335 11:19:21 -- nvmf/common.sh@717 -- # local ip 00:28:13.335 11:19:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.335 11:19:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.335 11:19:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.335 11:19:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.335 11:19:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.335 11:19:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.335 11:19:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.335 11:19:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.335 11:19:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.335 11:19:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:13.335 11:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.335 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.900 nvme0n1 00:28:13.900 11:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.900 11:19:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.900 11:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.900 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.900 11:19:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.900 11:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.900 11:19:21 -- host/auth.sh@73 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:28:13.900 11:19:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.900 11:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.900 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.900 11:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.900 11:19:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.900 11:19:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:13.900 11:19:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.900 11:19:21 -- host/auth.sh@44 -- # digest=sha512 00:28:13.900 11:19:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.900 11:19:21 -- host/auth.sh@44 -- # keyid=3 00:28:13.900 11:19:21 -- host/auth.sh@45 -- # key=DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:13.900 11:19:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:13.900 11:19:21 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:13.900 11:19:21 -- host/auth.sh@49 -- # echo DHHC-1:02:YjQ1MGZiMWMxMjhjYzg5MjgwYTNjMGE3Y2EzZDg5ZWUyN2U2MjE3YWIzN2ZmMTNl4foGxA==: 00:28:13.900 11:19:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:28:13.900 11:19:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.900 11:19:21 -- host/auth.sh@68 -- # digest=sha512 00:28:13.900 11:19:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:13.900 11:19:21 -- host/auth.sh@68 -- # keyid=3 00:28:13.900 11:19:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:13.900 11:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.900 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:28:13.900 11:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.900 11:19:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.900 11:19:21 -- nvmf/common.sh@717 -- # local ip 00:28:13.900 11:19:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.900 11:19:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.900 11:19:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.900 11:19:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.900 11:19:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.900 11:19:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.900 11:19:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.900 11:19:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.900 11:19:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.900 11:19:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:13.900 11:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.900 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:28:14.467 nvme0n1 00:28:14.467 11:19:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.467 11:19:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.467 11:19:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:14.467 11:19:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.467 11:19:22 -- common/autotest_common.sh@10 -- # set +x 00:28:14.467 11:19:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.467 11:19:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.467 11:19:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.467 
11:19:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.467 11:19:22 -- common/autotest_common.sh@10 -- # set +x 00:28:14.467 11:19:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.467 11:19:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:14.467 11:19:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:14.467 11:19:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:14.467 11:19:22 -- host/auth.sh@44 -- # digest=sha512 00:28:14.467 11:19:22 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.467 11:19:22 -- host/auth.sh@44 -- # keyid=4 00:28:14.467 11:19:22 -- host/auth.sh@45 -- # key=DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:14.467 11:19:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:14.467 11:19:22 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:14.467 11:19:22 -- host/auth.sh@49 -- # echo DHHC-1:03:NDA4NWM5NTg3ZDJhMmVjYmI4YWY2ZjBiOWI1OWExOTNkM2FmZDkzNGM3NWU4MjUxNzhjNDA5OTcwZjQzMTA1YhbgSDY=: 00:28:14.467 11:19:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:28:14.467 11:19:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:14.467 11:19:22 -- host/auth.sh@68 -- # digest=sha512 00:28:14.467 11:19:22 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:14.467 11:19:22 -- host/auth.sh@68 -- # keyid=4 00:28:14.467 11:19:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:14.467 11:19:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.467 11:19:22 -- common/autotest_common.sh@10 -- # set +x 00:28:14.467 11:19:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.467 11:19:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:14.467 11:19:22 -- nvmf/common.sh@717 -- # local ip 00:28:14.467 11:19:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:14.467 11:19:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:14.467 11:19:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.467 11:19:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.467 11:19:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:14.467 11:19:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.467 11:19:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:14.467 11:19:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:14.467 11:19:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:14.467 11:19:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.467 11:19:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.467 11:19:22 -- common/autotest_common.sh@10 -- # set +x 00:28:15.402 nvme0n1 00:28:15.402 11:19:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.402 11:19:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.402 11:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.402 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:28:15.402 11:19:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:15.402 11:19:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.402 11:19:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.402 11:19:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.402 11:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.402 
11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:28:15.402 11:19:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.402 11:19:23 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:15.402 11:19:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:15.402 11:19:23 -- host/auth.sh@44 -- # digest=sha256 00:28:15.402 11:19:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.402 11:19:23 -- host/auth.sh@44 -- # keyid=1 00:28:15.402 11:19:23 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:15.402 11:19:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:15.402 11:19:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:15.402 11:19:23 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGNjOWIyYWMyMzkxYTg3NzBmNWJlNTY2ZWI3MmY3ZWVhZDJkNTEyNzZmZjZiZTQ5iH3bug==: 00:28:15.402 11:19:23 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:15.402 11:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.402 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:28:15.402 11:19:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.402 11:19:23 -- host/auth.sh@119 -- # get_main_ns_ip 00:28:15.402 11:19:23 -- nvmf/common.sh@717 -- # local ip 00:28:15.402 11:19:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.402 11:19:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.402 11:19:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.402 11:19:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.402 11:19:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.402 11:19:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.402 11:19:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.402 11:19:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.402 11:19:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.402 11:19:23 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:15.402 11:19:23 -- common/autotest_common.sh@638 -- # local es=0 00:28:15.402 11:19:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:15.402 11:19:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:15.402 11:19:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:15.402 11:19:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:15.402 11:19:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:15.402 11:19:23 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:15.402 11:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.402 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:28:15.403 2024/04/18 11:19:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:28:15.403 request: 00:28:15.403 { 00:28:15.403 "method": 
"bdev_nvme_attach_controller", 00:28:15.403 "params": { 00:28:15.403 "name": "nvme0", 00:28:15.403 "trtype": "tcp", 00:28:15.403 "traddr": "10.0.0.1", 00:28:15.403 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:15.403 "adrfam": "ipv4", 00:28:15.403 "trsvcid": "4420", 00:28:15.403 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:28:15.403 } 00:28:15.403 } 00:28:15.403 Got JSON-RPC error response 00:28:15.403 GoRPCClient: error on JSON-RPC call 00:28:15.403 11:19:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:15.403 11:19:23 -- common/autotest_common.sh@641 -- # es=1 00:28:15.403 11:19:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:15.403 11:19:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:15.403 11:19:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:15.403 11:19:23 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.403 11:19:23 -- host/auth.sh@121 -- # jq length 00:28:15.403 11:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.403 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:28:15.403 11:19:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.403 11:19:23 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:28:15.403 11:19:23 -- host/auth.sh@124 -- # get_main_ns_ip 00:28:15.403 11:19:23 -- nvmf/common.sh@717 -- # local ip 00:28:15.403 11:19:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.403 11:19:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.403 11:19:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.403 11:19:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.403 11:19:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.403 11:19:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.403 11:19:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.403 11:19:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.403 11:19:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.403 11:19:23 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:15.403 11:19:23 -- common/autotest_common.sh@638 -- # local es=0 00:28:15.403 11:19:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:15.403 11:19:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:15.403 11:19:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:15.403 11:19:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:15.403 11:19:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:15.403 11:19:23 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:15.403 11:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.403 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:28:15.403 2024/04/18 11:19:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:28:15.403 
request: 00:28:15.403 { 00:28:15.403 "method": "bdev_nvme_attach_controller", 00:28:15.403 "params": { 00:28:15.403 "name": "nvme0", 00:28:15.403 "trtype": "tcp", 00:28:15.403 "traddr": "10.0.0.1", 00:28:15.403 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:15.403 "adrfam": "ipv4", 00:28:15.403 "trsvcid": "4420", 00:28:15.403 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:15.403 "dhchap_key": "key2" 00:28:15.403 } 00:28:15.403 } 00:28:15.403 Got JSON-RPC error response 00:28:15.403 GoRPCClient: error on JSON-RPC call 00:28:15.403 11:19:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:15.403 11:19:23 -- common/autotest_common.sh@641 -- # es=1 00:28:15.403 11:19:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:15.403 11:19:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:15.403 11:19:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:15.403 11:19:23 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.403 11:19:23 -- host/auth.sh@127 -- # jq length 00:28:15.403 11:19:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.403 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:28:15.403 11:19:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.403 11:19:23 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:28:15.403 11:19:23 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:28:15.403 11:19:23 -- host/auth.sh@130 -- # cleanup 00:28:15.403 11:19:23 -- host/auth.sh@24 -- # nvmftestfini 00:28:15.403 11:19:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:15.403 11:19:23 -- nvmf/common.sh@117 -- # sync 00:28:15.403 11:19:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:15.403 11:19:23 -- nvmf/common.sh@120 -- # set +e 00:28:15.403 11:19:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:15.403 11:19:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:15.403 rmmod nvme_tcp 00:28:15.403 rmmod nvme_fabrics 00:28:15.403 11:19:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:15.403 11:19:23 -- nvmf/common.sh@124 -- # set -e 00:28:15.403 11:19:23 -- nvmf/common.sh@125 -- # return 0 00:28:15.403 11:19:23 -- nvmf/common.sh@478 -- # '[' -n 85934 ']' 00:28:15.403 11:19:23 -- nvmf/common.sh@479 -- # killprocess 85934 00:28:15.403 11:19:23 -- common/autotest_common.sh@936 -- # '[' -z 85934 ']' 00:28:15.403 11:19:23 -- common/autotest_common.sh@940 -- # kill -0 85934 00:28:15.403 11:19:23 -- common/autotest_common.sh@941 -- # uname 00:28:15.403 11:19:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:15.403 11:19:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85934 00:28:15.661 killing process with pid 85934 00:28:15.661 11:19:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:15.661 11:19:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:15.661 11:19:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85934' 00:28:15.661 11:19:23 -- common/autotest_common.sh@955 -- # kill 85934 00:28:15.661 11:19:23 -- common/autotest_common.sh@960 -- # wait 85934 00:28:16.598 11:19:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:16.598 11:19:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:16.598 11:19:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:16.598 11:19:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.598 11:19:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:16.598 11:19:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.598 
11:19:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.598 11:19:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.598 11:19:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:16.598 11:19:24 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:16.598 11:19:24 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:16.598 11:19:24 -- host/auth.sh@27 -- # clean_kernel_target 00:28:16.598 11:19:24 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:16.598 11:19:24 -- nvmf/common.sh@675 -- # echo 0 00:28:16.598 11:19:24 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:16.598 11:19:24 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:16.598 11:19:24 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:16.598 11:19:24 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:16.598 11:19:24 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:16.598 11:19:24 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:16.598 11:19:24 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:17.164 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:17.422 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:17.422 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:17.422 11:19:25 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.NRP /tmp/spdk.key-null.wiP /tmp/spdk.key-sha256.plr /tmp/spdk.key-sha384.mUb /tmp/spdk.key-sha512.wsp /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:28:17.422 11:19:25 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:17.680 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:17.680 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:17.680 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:17.680 00:28:17.680 real 0m40.103s 00:28:17.680 user 0m35.858s 00:28:17.680 sys 0m3.870s 00:28:17.680 11:19:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:17.680 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:28:17.680 ************************************ 00:28:17.680 END TEST nvmf_auth 00:28:17.680 ************************************ 00:28:17.937 11:19:25 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:28:17.937 11:19:25 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:17.937 11:19:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:17.937 11:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:17.937 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:28:17.938 ************************************ 00:28:17.938 START TEST nvmf_digest 00:28:17.938 ************************************ 00:28:17.938 11:19:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:17.938 * Looking for test storage... 
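[annotation] The nvmf_auth run that ends above loops over every digest/dhgroup/keyid combination: for each key it re-programs the kernel nvmet target with the DH-HMAC-CHAP secret, points the SPDK host at the matching digest and FFDHE group, re-attaches the controller, and then detaches it before the next iteration; the two NOT cases near the end confirm that the same attach fails with JSON-RPC -32602 when no key, or the wrong key (key2), is supplied. A minimal sketch of one iteration follows. The rpc_cmd calls are copied from the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py); the configfs attribute names on the target side are assumptions inferred from the echo calls in host/auth.sh@47-49, and the secret is elided.

  # target side (kernel nvmet); attribute names assumed, host already linked under
  # .../subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'  > "$host/dhchap_hash"      # assumed attribute name
  echo ffdhe6144       > "$host/dhchap_dhgroup"   # assumed attribute name
  echo 'DHHC-1:02:...' > "$host/dhchap_key"       # secret elided

  # host side (SPDK bdev_nvme), commands copied from the trace
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  rpc_cmd bdev_nvme_detach_controller nvme0

The cleanup traced above then tears the kernel target down in the reverse order (remove the allowed_hosts link and host directory, disable and remove the port and namespace, rmdir the subsystem, modprobe -r nvmet_tcp nvmet) before nvmf_digest starts.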
00:28:17.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:17.938 11:19:26 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:17.938 11:19:26 -- nvmf/common.sh@7 -- # uname -s 00:28:17.938 11:19:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.938 11:19:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.938 11:19:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.938 11:19:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.938 11:19:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.938 11:19:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.938 11:19:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.938 11:19:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.938 11:19:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.938 11:19:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.938 11:19:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:28:17.938 11:19:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:28:17.938 11:19:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.938 11:19:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.938 11:19:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:17.938 11:19:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.938 11:19:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:17.938 11:19:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.938 11:19:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.938 11:19:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.938 11:19:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.938 11:19:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.938 11:19:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.938 11:19:26 -- paths/export.sh@5 -- # export PATH 00:28:17.938 11:19:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.938 11:19:26 -- nvmf/common.sh@47 -- # : 0 00:28:17.938 11:19:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:17.938 11:19:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:17.938 11:19:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.938 11:19:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.938 11:19:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.938 11:19:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:17.938 11:19:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:17.938 11:19:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:17.938 11:19:26 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:17.938 11:19:26 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:17.938 11:19:26 -- host/digest.sh@16 -- # runtime=2 00:28:17.938 11:19:26 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:17.938 11:19:26 -- host/digest.sh@138 -- # nvmftestinit 00:28:17.938 11:19:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:17.938 11:19:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.938 11:19:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:17.938 11:19:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:17.938 11:19:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:17.938 11:19:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.938 11:19:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.938 11:19:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.938 11:19:26 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:17.938 11:19:26 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:17.938 11:19:26 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:17.938 11:19:26 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:17.938 11:19:26 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:17.938 11:19:26 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:17.938 11:19:26 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.938 11:19:26 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.938 11:19:26 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:17.938 11:19:26 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:17.938 11:19:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:28:17.938 11:19:26 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:17.938 11:19:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:17.938 11:19:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.938 11:19:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:17.938 11:19:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:17.938 11:19:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:17.938 11:19:26 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:17.938 11:19:26 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:17.938 11:19:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:17.938 Cannot find device "nvmf_tgt_br" 00:28:17.938 11:19:26 -- nvmf/common.sh@155 -- # true 00:28:17.938 11:19:26 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:17.938 Cannot find device "nvmf_tgt_br2" 00:28:17.938 11:19:26 -- nvmf/common.sh@156 -- # true 00:28:17.938 11:19:26 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:17.938 11:19:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:17.938 Cannot find device "nvmf_tgt_br" 00:28:17.938 11:19:26 -- nvmf/common.sh@158 -- # true 00:28:17.938 11:19:26 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:17.938 Cannot find device "nvmf_tgt_br2" 00:28:17.938 11:19:26 -- nvmf/common.sh@159 -- # true 00:28:17.938 11:19:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:18.196 11:19:26 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:18.196 11:19:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:18.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:18.196 11:19:26 -- nvmf/common.sh@162 -- # true 00:28:18.196 11:19:26 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:18.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:18.196 11:19:26 -- nvmf/common.sh@163 -- # true 00:28:18.196 11:19:26 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:18.196 11:19:26 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:18.196 11:19:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:18.196 11:19:26 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:18.196 11:19:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:18.196 11:19:26 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:18.196 11:19:26 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:18.196 11:19:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:18.196 11:19:26 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:18.196 11:19:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:18.196 11:19:26 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:18.196 11:19:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:18.196 11:19:26 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:18.196 11:19:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:18.196 11:19:26 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:18.196 11:19:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:18.196 11:19:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:18.196 11:19:26 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:18.196 11:19:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:18.196 11:19:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:18.196 11:19:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:18.196 11:19:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:18.196 11:19:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:18.196 11:19:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:18.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:28:18.196 00:28:18.196 --- 10.0.0.2 ping statistics --- 00:28:18.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.196 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:28:18.196 11:19:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:18.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:18.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:28:18.454 00:28:18.454 --- 10.0.0.3 ping statistics --- 00:28:18.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.454 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:28:18.454 11:19:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:18.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:18.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:28:18.454 00:28:18.454 --- 10.0.0.1 ping statistics --- 00:28:18.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.454 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:28:18.454 11:19:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.454 11:19:26 -- nvmf/common.sh@422 -- # return 0 00:28:18.454 11:19:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:18.454 11:19:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.454 11:19:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:18.454 11:19:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:18.454 11:19:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.454 11:19:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:18.454 11:19:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:18.454 11:19:26 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:18.454 11:19:26 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:18.454 11:19:26 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:18.454 11:19:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:18.454 11:19:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:18.454 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:28:18.454 ************************************ 00:28:18.454 START TEST nvmf_digest_clean 00:28:18.454 ************************************ 00:28:18.454 11:19:26 -- common/autotest_common.sh@1111 -- # run_digest 00:28:18.454 11:19:26 -- host/digest.sh@120 -- # local dsa_initiator 00:28:18.454 11:19:26 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:18.454 11:19:26 -- host/digest.sh@121 -- # dsa_initiator=false 
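[annotation] nvmftestinit/nvmf_veth_init, traced above, rebuilds the virtual test topology before the digest test: the target runs inside the nvmf_tgt_ns_spdk namespace, the initiator stays in the root namespace, and both sides are joined through the nvmf_br bridge; the three pings confirm 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 from inside the namespace. Condensed from the trace (the individual "ip link set ... up" calls are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT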
00:28:18.454 11:19:26 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:18.454 11:19:26 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:18.454 11:19:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:18.454 11:19:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:18.454 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:28:18.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.454 11:19:26 -- nvmf/common.sh@470 -- # nvmfpid=87574 00:28:18.454 11:19:26 -- nvmf/common.sh@471 -- # waitforlisten 87574 00:28:18.454 11:19:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:18.454 11:19:26 -- common/autotest_common.sh@817 -- # '[' -z 87574 ']' 00:28:18.454 11:19:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.454 11:19:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:18.454 11:19:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.454 11:19:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:18.454 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:28:18.454 [2024-04-18 11:19:26.646854] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:18.454 [2024-04-18 11:19:26.647029] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.712 [2024-04-18 11:19:26.826416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.971 [2024-04-18 11:19:27.118383] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.971 [2024-04-18 11:19:27.118453] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.971 [2024-04-18 11:19:27.118478] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.971 [2024-04-18 11:19:27.118510] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.971 [2024-04-18 11:19:27.118528] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
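[annotation] With the namespace in place, nvmfappstart launches the target inside it with --wait-for-rpc (so accel options could still be tuned before framework init) and waits for the RPC socket; the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is that wait. A sketch of what the trace shows, with waitforlisten taken as the stock autotest helper that polls the app's RPC socket:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!                 # 87574 in this run
  waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock answers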
00:28:18.971 [2024-04-18 11:19:27.118578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.537 11:19:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:19.537 11:19:27 -- common/autotest_common.sh@850 -- # return 0 00:28:19.537 11:19:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:19.537 11:19:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:19.537 11:19:27 -- common/autotest_common.sh@10 -- # set +x 00:28:19.537 11:19:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.537 11:19:27 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:19.537 11:19:27 -- host/digest.sh@126 -- # common_target_config 00:28:19.537 11:19:27 -- host/digest.sh@43 -- # rpc_cmd 00:28:19.537 11:19:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.537 11:19:27 -- common/autotest_common.sh@10 -- # set +x 00:28:19.796 null0 00:28:19.796 [2024-04-18 11:19:27.985154] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.796 [2024-04-18 11:19:28.009286] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.796 11:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.796 11:19:28 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:19.796 11:19:28 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:19.796 11:19:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:19.796 11:19:28 -- host/digest.sh@80 -- # rw=randread 00:28:19.796 11:19:28 -- host/digest.sh@80 -- # bs=4096 00:28:19.796 11:19:28 -- host/digest.sh@80 -- # qd=128 00:28:19.796 11:19:28 -- host/digest.sh@80 -- # scan_dsa=false 00:28:19.796 11:19:28 -- host/digest.sh@83 -- # bperfpid=87624 00:28:19.796 11:19:28 -- host/digest.sh@84 -- # waitforlisten 87624 /var/tmp/bperf.sock 00:28:19.796 11:19:28 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:20.054 11:19:28 -- common/autotest_common.sh@817 -- # '[' -z 87624 ']' 00:28:20.054 11:19:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.054 11:19:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:20.054 11:19:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.054 11:19:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:20.054 11:19:28 -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 [2024-04-18 11:19:28.151618] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
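[annotation] Once the target answers, common_target_config provisions it (the log only shows the resulting notices: a null0 bdev, TCP transport init, and a listener on 10.0.0.2:4420) and run_bperf then starts a separate bdevperf app as the NVMe/TCP host with data digest enabled. The target-side RPC sequence below is an assumption about what digest.sh issues, reconstructed from those notices; the null bdev sizes and the -a flag are likewise assumed. The host-side commands are copied from the trace.

  # assumed RPC sequence behind common_target_config
  rpc_cmd bdev_null_create null0 1000 512                       # sizes assumed
  rpc_cmd nvmf_create_transport $NVMF_TRANSPORT_OPTS            # '-t tcp -o' per the trace
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side, copied from the trace: bdevperf attaches with --ddgst so every read
  # carries a data digest the host must verify (crc32c)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests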
00:28:20.054 [2024-04-18 11:19:28.152011] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87624 ] 00:28:20.313 [2024-04-18 11:19:28.323748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.586 [2024-04-18 11:19:28.587973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.158 11:19:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:21.158 11:19:29 -- common/autotest_common.sh@850 -- # return 0 00:28:21.158 11:19:29 -- host/digest.sh@86 -- # false 00:28:21.158 11:19:29 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:21.158 11:19:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:21.722 11:19:29 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.722 11:19:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.980 nvme0n1 00:28:21.980 11:19:30 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:21.980 11:19:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.980 Running I/O for 2 seconds... 00:28:24.514 00:28:24.514 Latency(us) 00:28:24.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.514 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:24.514 nvme0n1 : 2.01 14824.00 57.91 0.00 0.00 8625.08 4974.78 17515.99 00:28:24.514 =================================================================================================================== 00:28:24.514 Total : 14824.00 57.91 0.00 0.00 8625.08 4974.78 17515.99 00:28:24.514 0 00:28:24.514 11:19:32 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:24.514 11:19:32 -- host/digest.sh@93 -- # get_accel_stats 00:28:24.514 11:19:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:24.514 11:19:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:24.514 11:19:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:24.514 | select(.opcode=="crc32c") 00:28:24.514 | "\(.module_name) \(.executed)"' 00:28:24.514 11:19:32 -- host/digest.sh@94 -- # false 00:28:24.514 11:19:32 -- host/digest.sh@94 -- # exp_module=software 00:28:24.514 11:19:32 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:24.514 11:19:32 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:24.514 11:19:32 -- host/digest.sh@98 -- # killprocess 87624 00:28:24.514 11:19:32 -- common/autotest_common.sh@936 -- # '[' -z 87624 ']' 00:28:24.514 11:19:32 -- common/autotest_common.sh@940 -- # kill -0 87624 00:28:24.514 11:19:32 -- common/autotest_common.sh@941 -- # uname 00:28:24.514 11:19:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:24.514 11:19:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87624 00:28:24.514 11:19:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:24.514 killing process with pid 87624 00:28:24.514 11:19:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:24.514 
11:19:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87624' 00:28:24.514 Received shutdown signal, test time was about 2.000000 seconds 00:28:24.514 00:28:24.514 Latency(us) 00:28:24.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.514 =================================================================================================================== 00:28:24.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.514 11:19:32 -- common/autotest_common.sh@955 -- # kill 87624 00:28:24.514 11:19:32 -- common/autotest_common.sh@960 -- # wait 87624 00:28:25.446 11:19:33 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:25.446 11:19:33 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:25.446 11:19:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:25.446 11:19:33 -- host/digest.sh@80 -- # rw=randread 00:28:25.446 11:19:33 -- host/digest.sh@80 -- # bs=131072 00:28:25.446 11:19:33 -- host/digest.sh@80 -- # qd=16 00:28:25.446 11:19:33 -- host/digest.sh@80 -- # scan_dsa=false 00:28:25.446 11:19:33 -- host/digest.sh@83 -- # bperfpid=87727 00:28:25.446 11:19:33 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:25.446 11:19:33 -- host/digest.sh@84 -- # waitforlisten 87727 /var/tmp/bperf.sock 00:28:25.446 11:19:33 -- common/autotest_common.sh@817 -- # '[' -z 87727 ']' 00:28:25.446 11:19:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.446 11:19:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:25.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:25.446 11:19:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:25.446 11:19:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:25.446 11:19:33 -- common/autotest_common.sh@10 -- # set +x 00:28:25.446 [2024-04-18 11:19:33.500696] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:25.446 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:25.446 Zero copy mechanism will not be used. 
00:28:25.446 [2024-04-18 11:19:33.500869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87727 ] 00:28:25.704 [2024-04-18 11:19:33.676015] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.961 [2024-04-18 11:19:33.930040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.528 11:19:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:26.528 11:19:34 -- common/autotest_common.sh@850 -- # return 0 00:28:26.528 11:19:34 -- host/digest.sh@86 -- # false 00:28:26.528 11:19:34 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:26.528 11:19:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:26.786 11:19:34 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.786 11:19:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.353 nvme0n1 00:28:27.353 11:19:35 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:27.353 11:19:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:27.353 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:27.353 Zero copy mechanism will not be used. 00:28:27.353 Running I/O for 2 seconds... 00:28:29.263 00:28:29.263 Latency(us) 00:28:29.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.263 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:29.263 nvme0n1 : 2.00 5723.39 715.42 0.00 0.00 2791.12 793.13 7238.75 00:28:29.263 =================================================================================================================== 00:28:29.263 Total : 5723.39 715.42 0.00 0.00 2791.12 793.13 7238.75 00:28:29.263 0 00:28:29.263 11:19:37 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:29.263 11:19:37 -- host/digest.sh@93 -- # get_accel_stats 00:28:29.263 11:19:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:29.263 11:19:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:29.263 11:19:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:29.263 | select(.opcode=="crc32c") 00:28:29.263 | "\(.module_name) \(.executed)"' 00:28:29.538 11:19:37 -- host/digest.sh@94 -- # false 00:28:29.538 11:19:37 -- host/digest.sh@94 -- # exp_module=software 00:28:29.538 11:19:37 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:29.538 11:19:37 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:29.538 11:19:37 -- host/digest.sh@98 -- # killprocess 87727 00:28:29.538 11:19:37 -- common/autotest_common.sh@936 -- # '[' -z 87727 ']' 00:28:29.538 11:19:37 -- common/autotest_common.sh@940 -- # kill -0 87727 00:28:29.538 11:19:37 -- common/autotest_common.sh@941 -- # uname 00:28:29.538 11:19:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:29.538 11:19:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87727 00:28:29.538 11:19:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:29.538 
11:19:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:29.538 killing process with pid 87727 00:28:29.538 11:19:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87727' 00:28:29.538 11:19:37 -- common/autotest_common.sh@955 -- # kill 87727 00:28:29.538 Received shutdown signal, test time was about 2.000000 seconds 00:28:29.538 00:28:29.538 Latency(us) 00:28:29.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.538 =================================================================================================================== 00:28:29.538 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.538 11:19:37 -- common/autotest_common.sh@960 -- # wait 87727 00:28:30.917 11:19:38 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:30.917 11:19:38 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:30.917 11:19:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:30.917 11:19:38 -- host/digest.sh@80 -- # rw=randwrite 00:28:30.917 11:19:38 -- host/digest.sh@80 -- # bs=4096 00:28:30.917 11:19:38 -- host/digest.sh@80 -- # qd=128 00:28:30.917 11:19:38 -- host/digest.sh@80 -- # scan_dsa=false 00:28:30.917 11:19:38 -- host/digest.sh@83 -- # bperfpid=87830 00:28:30.917 11:19:38 -- host/digest.sh@84 -- # waitforlisten 87830 /var/tmp/bperf.sock 00:28:30.917 11:19:38 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:30.917 11:19:38 -- common/autotest_common.sh@817 -- # '[' -z 87830 ']' 00:28:30.917 11:19:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:30.917 11:19:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:30.917 11:19:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:30.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:30.917 11:19:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:30.917 11:19:38 -- common/autotest_common.sh@10 -- # set +x 00:28:30.917 [2024-04-18 11:19:38.919905] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:28:30.917 [2024-04-18 11:19:38.920396] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87830 ] 00:28:30.917 [2024-04-18 11:19:39.080849] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.175 [2024-04-18 11:19:39.318423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.761 11:19:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:31.761 11:19:39 -- common/autotest_common.sh@850 -- # return 0 00:28:31.761 11:19:39 -- host/digest.sh@86 -- # false 00:28:31.761 11:19:39 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:31.761 11:19:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:32.327 11:19:40 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.327 11:19:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.587 nvme0n1 00:28:32.587 11:19:40 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:32.587 11:19:40 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:32.845 Running I/O for 2 seconds... 00:28:34.743 00:28:34.743 Latency(us) 00:28:34.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.743 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:34.743 nvme0n1 : 2.01 18547.04 72.45 0.00 0.00 6891.57 3336.38 11379.43 00:28:34.743 =================================================================================================================== 00:28:34.743 Total : 18547.04 72.45 0.00 0.00 6891.57 3336.38 11379.43 00:28:34.743 0 00:28:34.743 11:19:42 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:34.743 11:19:42 -- host/digest.sh@93 -- # get_accel_stats 00:28:34.743 11:19:42 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:34.743 11:19:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:34.743 11:19:42 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:34.743 | select(.opcode=="crc32c") 00:28:34.743 | "\(.module_name) \(.executed)"' 00:28:35.001 11:19:43 -- host/digest.sh@94 -- # false 00:28:35.001 11:19:43 -- host/digest.sh@94 -- # exp_module=software 00:28:35.001 11:19:43 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:35.001 11:19:43 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:35.001 11:19:43 -- host/digest.sh@98 -- # killprocess 87830 00:28:35.001 11:19:43 -- common/autotest_common.sh@936 -- # '[' -z 87830 ']' 00:28:35.001 11:19:43 -- common/autotest_common.sh@940 -- # kill -0 87830 00:28:35.001 11:19:43 -- common/autotest_common.sh@941 -- # uname 00:28:35.001 11:19:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:35.001 11:19:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87830 00:28:35.001 killing process with pid 87830 00:28:35.001 Received shutdown signal, test time was about 2.000000 seconds 00:28:35.001 00:28:35.001 Latency(us) 00:28:35.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:28:35.001 =================================================================================================================== 00:28:35.001 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:35.001 11:19:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:35.001 11:19:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:35.001 11:19:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87830' 00:28:35.001 11:19:43 -- common/autotest_common.sh@955 -- # kill 87830 00:28:35.001 11:19:43 -- common/autotest_common.sh@960 -- # wait 87830 00:28:35.934 11:19:44 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:35.934 11:19:44 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:35.934 11:19:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:35.934 11:19:44 -- host/digest.sh@80 -- # rw=randwrite 00:28:35.934 11:19:44 -- host/digest.sh@80 -- # bs=131072 00:28:35.934 11:19:44 -- host/digest.sh@80 -- # qd=16 00:28:35.934 11:19:44 -- host/digest.sh@80 -- # scan_dsa=false 00:28:35.934 11:19:44 -- host/digest.sh@83 -- # bperfpid=87931 00:28:35.934 11:19:44 -- host/digest.sh@84 -- # waitforlisten 87931 /var/tmp/bperf.sock 00:28:35.934 11:19:44 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:35.934 11:19:44 -- common/autotest_common.sh@817 -- # '[' -z 87931 ']' 00:28:35.934 11:19:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:35.934 11:19:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:35.934 11:19:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:35.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:35.934 11:19:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:35.934 11:19:44 -- common/autotest_common.sh@10 -- # set +x 00:28:36.192 [2024-04-18 11:19:44.218092] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:36.192 [2024-04-18 11:19:44.218475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87931 ] 00:28:36.192 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:36.193 Zero copy mechanism will not be used. 
00:28:36.193 [2024-04-18 11:19:44.393126] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.450 [2024-04-18 11:19:44.623141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.059 11:19:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:37.059 11:19:45 -- common/autotest_common.sh@850 -- # return 0 00:28:37.059 11:19:45 -- host/digest.sh@86 -- # false 00:28:37.059 11:19:45 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:37.059 11:19:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:37.642 11:19:45 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.642 11:19:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.899 nvme0n1 00:28:37.899 11:19:46 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:37.899 11:19:46 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:38.156 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:38.156 Zero copy mechanism will not be used. 00:28:38.156 Running I/O for 2 seconds... 00:28:40.056 00:28:40.056 Latency(us) 00:28:40.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.056 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:40.056 nvme0n1 : 2.00 5719.53 714.94 0.00 0.00 2789.76 1966.08 4796.04 00:28:40.056 =================================================================================================================== 00:28:40.056 Total : 5719.53 714.94 0.00 0.00 2789.76 1966.08 4796.04 00:28:40.056 0 00:28:40.056 11:19:48 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:40.056 11:19:48 -- host/digest.sh@93 -- # get_accel_stats 00:28:40.056 11:19:48 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:40.056 11:19:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:40.056 11:19:48 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:40.056 | select(.opcode=="crc32c") 00:28:40.056 | "\(.module_name) \(.executed)"' 00:28:40.315 11:19:48 -- host/digest.sh@94 -- # false 00:28:40.315 11:19:48 -- host/digest.sh@94 -- # exp_module=software 00:28:40.315 11:19:48 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:40.315 11:19:48 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:40.315 11:19:48 -- host/digest.sh@98 -- # killprocess 87931 00:28:40.315 11:19:48 -- common/autotest_common.sh@936 -- # '[' -z 87931 ']' 00:28:40.315 11:19:48 -- common/autotest_common.sh@940 -- # kill -0 87931 00:28:40.315 11:19:48 -- common/autotest_common.sh@941 -- # uname 00:28:40.315 11:19:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:40.315 11:19:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87931 00:28:40.315 killing process with pid 87931 00:28:40.315 Received shutdown signal, test time was about 2.000000 seconds 00:28:40.315 00:28:40.315 Latency(us) 00:28:40.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.315 =================================================================================================================== 00:28:40.315 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:40.315 11:19:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:40.315 11:19:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:40.315 11:19:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87931' 00:28:40.315 11:19:48 -- common/autotest_common.sh@955 -- # kill 87931 00:28:40.315 11:19:48 -- common/autotest_common.sh@960 -- # wait 87931 00:28:41.690 11:19:49 -- host/digest.sh@132 -- # killprocess 87574 00:28:41.690 11:19:49 -- common/autotest_common.sh@936 -- # '[' -z 87574 ']' 00:28:41.691 11:19:49 -- common/autotest_common.sh@940 -- # kill -0 87574 00:28:41.691 11:19:49 -- common/autotest_common.sh@941 -- # uname 00:28:41.691 11:19:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:41.691 11:19:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87574 00:28:41.691 killing process with pid 87574 00:28:41.691 11:19:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:41.691 11:19:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:41.691 11:19:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87574' 00:28:41.691 11:19:49 -- common/autotest_common.sh@955 -- # kill 87574 00:28:41.691 11:19:49 -- common/autotest_common.sh@960 -- # wait 87574 00:28:42.624 00:28:42.624 real 0m24.260s 00:28:42.624 user 0m45.767s 00:28:42.624 sys 0m4.843s 00:28:42.624 11:19:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:42.624 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:28:42.624 ************************************ 00:28:42.624 END TEST nvmf_digest_clean 00:28:42.624 ************************************ 00:28:42.624 11:19:50 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:42.624 11:19:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:42.624 11:19:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:42.624 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:28:42.882 ************************************ 00:28:42.882 START TEST nvmf_digest_error 00:28:42.882 ************************************ 00:28:42.882 11:19:50 -- common/autotest_common.sh@1111 -- # run_digest_error 00:28:42.882 11:19:50 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:42.882 11:19:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:42.882 11:19:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:42.882 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:28:42.882 11:19:50 -- nvmf/common.sh@470 -- # nvmfpid=88074 00:28:42.882 11:19:50 -- nvmf/common.sh@471 -- # waitforlisten 88074 00:28:42.882 11:19:50 -- common/autotest_common.sh@817 -- # '[' -z 88074 ']' 00:28:42.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.882 11:19:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:42.882 11:19:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.882 11:19:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:42.882 11:19:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:42.882 11:19:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:42.882 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:28:42.882 [2024-04-18 11:19:51.053889] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:42.882 [2024-04-18 11:19:51.054204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.141 [2024-04-18 11:19:51.247755] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.399 [2024-04-18 11:19:51.511755] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.399 [2024-04-18 11:19:51.511822] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.399 [2024-04-18 11:19:51.511858] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.399 [2024-04-18 11:19:51.511883] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.399 [2024-04-18 11:19:51.511898] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.399 [2024-04-18 11:19:51.511948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.965 11:19:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:43.965 11:19:51 -- common/autotest_common.sh@850 -- # return 0 00:28:43.965 11:19:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:43.965 11:19:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:43.965 11:19:51 -- common/autotest_common.sh@10 -- # set +x 00:28:43.965 11:19:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.965 11:19:52 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:43.965 11:19:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:43.965 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:28:43.965 [2024-04-18 11:19:52.037189] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:43.965 11:19:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:43.965 11:19:52 -- host/digest.sh@105 -- # common_target_config 00:28:43.965 11:19:52 -- host/digest.sh@43 -- # rpc_cmd 00:28:43.965 11:19:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:43.965 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:28:44.224 null0 00:28:44.224 [2024-04-18 11:19:52.374794] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.224 [2024-04-18 11:19:52.398906] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.224 11:19:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:44.224 11:19:52 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:44.224 11:19:52 -- host/digest.sh@54 -- # local rw bs qd 00:28:44.224 11:19:52 -- host/digest.sh@56 -- # rw=randread 00:28:44.224 11:19:52 -- host/digest.sh@56 -- # bs=4096 00:28:44.224 11:19:52 -- host/digest.sh@56 -- # qd=128 00:28:44.224 11:19:52 -- host/digest.sh@58 -- # bperfpid=88118 00:28:44.224 11:19:52 -- host/digest.sh@60 -- # waitforlisten 88118 /var/tmp/bperf.sock 00:28:44.224 11:19:52 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randread -o 4096 -t 2 -q 128 -z 00:28:44.224 11:19:52 -- common/autotest_common.sh@817 -- # '[' -z 88118 ']' 00:28:44.224 11:19:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:44.224 11:19:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:44.224 11:19:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:44.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:44.224 11:19:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:44.224 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:28:44.481 [2024-04-18 11:19:52.492879] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:44.481 [2024-04-18 11:19:52.493089] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88118 ] 00:28:44.481 [2024-04-18 11:19:52.658274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.739 [2024-04-18 11:19:52.920698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.304 11:19:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:45.304 11:19:53 -- common/autotest_common.sh@850 -- # return 0 00:28:45.304 11:19:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:45.304 11:19:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:45.563 11:19:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:45.563 11:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:45.563 11:19:53 -- common/autotest_common.sh@10 -- # set +x 00:28:45.564 11:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:45.564 11:19:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.564 11:19:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.859 nvme0n1 00:28:45.859 11:19:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:45.859 11:19:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:45.859 11:19:54 -- common/autotest_common.sh@10 -- # set +x 00:28:45.859 11:19:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:45.859 11:19:54 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:45.859 11:19:54 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:46.118 Running I/O for 2 seconds... 
00:28:46.118 [2024-04-18 11:19:54.133079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.133195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.133223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.118 [2024-04-18 11:19:54.150069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.150164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.150190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.118 [2024-04-18 11:19:54.170357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.170438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.170463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.118 [2024-04-18 11:19:54.187578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.187667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.187698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.118 [2024-04-18 11:19:54.202761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.202854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.202878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.118 [2024-04-18 11:19:54.221193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.221288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.221312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.118 [2024-04-18 11:19:54.238131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.238206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.238249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.118 [2024-04-18 11:19:54.255961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.256038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.256078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.118 [2024-04-18 11:19:54.273972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.274034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.274075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.118 [2024-04-18 11:19:54.290745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.290838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.290877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.118 [2024-04-18 11:19:54.307966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.118 [2024-04-18 11:19:54.308045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.118 [2024-04-18 11:19:54.308085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.119 [2024-04-18 11:19:54.326547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.119 [2024-04-18 11:19:54.326652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.119 [2024-04-18 11:19:54.326678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.377 [2024-04-18 11:19:54.346070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.377 [2024-04-18 11:19:54.346195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.377 [2024-04-18 11:19:54.346224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.377 [2024-04-18 11:19:54.365406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.377 [2024-04-18 11:19:54.365566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.377 [2024-04-18 
11:19:54.365592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.377 [2024-04-18 11:19:54.385040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.377 [2024-04-18 11:19:54.385163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.377 [2024-04-18 11:19:54.385192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.377 [2024-04-18 11:19:54.404104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.377 [2024-04-18 11:19:54.404262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.377 [2024-04-18 11:19:54.404288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.377 [2024-04-18 11:19:54.423117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.377 [2024-04-18 11:19:54.423265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.377 [2024-04-18 11:19:54.423292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.377 [2024-04-18 11:19:54.439522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.377 [2024-04-18 11:19:54.439650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.377 [2024-04-18 11:19:54.439676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.377 [2024-04-18 11:19:54.461412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.377 [2024-04-18 11:19:54.461511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.377 [2024-04-18 11:19:54.461544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.377 [2024-04-18 11:19:54.483640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.377 [2024-04-18 11:19:54.483721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.377 [2024-04-18 11:19:54.483755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.377 [2024-04-18 11:19:54.502507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.377 [2024-04-18 11:19:54.502576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:10127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.377 [2024-04-18 11:19:54.502609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.377 [2024-04-18 11:19:54.521789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.377 [2024-04-18 11:19:54.521864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.378 [2024-04-18 11:19:54.521918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.378 [2024-04-18 11:19:54.540578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.378 [2024-04-18 11:19:54.540644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.378 [2024-04-18 11:19:54.540669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.378 [2024-04-18 11:19:54.559403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.378 [2024-04-18 11:19:54.559516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.378 [2024-04-18 11:19:54.559542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.378 [2024-04-18 11:19:54.578777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.378 [2024-04-18 11:19:54.578943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.378 [2024-04-18 11:19:54.578973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.378 [2024-04-18 11:19:54.597515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.378 [2024-04-18 11:19:54.597652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.378 [2024-04-18 11:19:54.597679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.616246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 11:19:54.616342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.635 [2024-04-18 11:19:54.616366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.634184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 
11:19:54.634287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.635 [2024-04-18 11:19:54.634328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.651895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 11:19:54.652008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.635 [2024-04-18 11:19:54.652032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.670683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 11:19:54.670741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.635 [2024-04-18 11:19:54.670767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.688294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 11:19:54.688388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.635 [2024-04-18 11:19:54.688435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.705182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 11:19:54.705275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.635 [2024-04-18 11:19:54.705309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.721855] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 11:19:54.721949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.635 [2024-04-18 11:19:54.721974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.740076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 11:19:54.740211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.635 [2024-04-18 11:19:54.740235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.757318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 11:19:54.757381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.635 [2024-04-18 11:19:54.757405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.774376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 11:19:54.774469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.635 [2024-04-18 11:19:54.774493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.635 [2024-04-18 11:19:54.792593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.635 [2024-04-18 11:19:54.792677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.636 [2024-04-18 11:19:54.792702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.636 [2024-04-18 11:19:54.809853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.636 [2024-04-18 11:19:54.809931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.636 [2024-04-18 11:19:54.809955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.636 [2024-04-18 11:19:54.827354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.636 [2024-04-18 11:19:54.827432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.636 [2024-04-18 11:19:54.827455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.636 [2024-04-18 11:19:54.844757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.636 [2024-04-18 11:19:54.844849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.636 [2024-04-18 11:19:54.844874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.894 [2024-04-18 11:19:54.863633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.894 [2024-04-18 11:19:54.863693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.894 [2024-04-18 11:19:54.863716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:46.894 [2024-04-18 11:19:54.882300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.894 [2024-04-18 11:19:54.882379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.894 [2024-04-18 11:19:54.882404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.894 [2024-04-18 11:19:54.900722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.894 [2024-04-18 11:19:54.900785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:54.900809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:54.917400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:54.917461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:54.917488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:54.937761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:54.937894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:54.937919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:54.956441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:54.956551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:54.956578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:54.976572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:54.976642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:54.976666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:54.997929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:54.997992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:54.998016] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:55.018504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:55.018564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:55.018588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:55.037693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:55.037789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:55.037813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:55.057640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:55.057705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:55.057729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:55.075047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:55.075122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:55.075148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:55.094694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:55.094789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:55.094831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.895 [2024-04-18 11:19:55.111777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:46.895 [2024-04-18 11:19:55.111838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.895 [2024-04-18 11:19:55.111863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.130933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.131021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:47.154 [2024-04-18 11:19:55.131046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.149643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.149721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.149763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.167336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.167429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.167454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.185420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.185493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.185518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.207892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.208001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.208026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.225927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.226004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.226046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.243551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.243638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.243679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.261880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.261994] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.262019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.280678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.280760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.280786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.299116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.299182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.299207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.318383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.318489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.318514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.338057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.338163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.338189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.154 [2024-04-18 11:19:55.357512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.154 [2024-04-18 11:19:55.357594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.154 [2024-04-18 11:19:55.357619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.375522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.375616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.375642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.393669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 
00:28:47.413 [2024-04-18 11:19:55.393761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.393786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.411784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.411877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.411903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.429472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.429567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.429591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.447145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.447223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.447263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.464756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.464841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.464865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.483316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.483444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.483468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.501203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.501293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.501317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.519068] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.519142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.519167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.538117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.538206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.538228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.556792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.556898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.556938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.574495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.574573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.574596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.592341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.592432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.592456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.611074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.611170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.611196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.413 [2024-04-18 11:19:55.628613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.413 [2024-04-18 11:19:55.628691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.413 [2024-04-18 11:19:55.628716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.646514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.646664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.646689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.665921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.666017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.666042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.685480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.685571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.685595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.704745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.704807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.704851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.722135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.722242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.722267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.739755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.739846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.739869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.756275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.756365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.756405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.773255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.773347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.773385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.790205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.790294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.790317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.808465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.808553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.808577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.826375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.826466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.826506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.845119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.845177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.845202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.864992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.865071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.865095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.672 [2024-04-18 11:19:55.884207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.672 [2024-04-18 11:19:55.884290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19675 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:47.672 [2024-04-18 11:19:55.884316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.930 [2024-04-18 11:19:55.902669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.930 [2024-04-18 11:19:55.902743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.930 [2024-04-18 11:19:55.902783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.930 [2024-04-18 11:19:55.919866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.930 [2024-04-18 11:19:55.919956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.930 [2024-04-18 11:19:55.919978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.930 [2024-04-18 11:19:55.937732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.930 [2024-04-18 11:19:55.937806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.930 [2024-04-18 11:19:55.937846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.930 [2024-04-18 11:19:55.956604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.930 [2024-04-18 11:19:55.956664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.930 [2024-04-18 11:19:55.956688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.930 [2024-04-18 11:19:55.975077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.930 [2024-04-18 11:19:55.975165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.930 [2024-04-18 11:19:55.975191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.930 [2024-04-18 11:19:55.994217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.930 [2024-04-18 11:19:55.994324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.930 [2024-04-18 11:19:55.994347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.930 [2024-04-18 11:19:56.012110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.930 [2024-04-18 11:19:56.012216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.930 [2024-04-18 11:19:56.012242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.930 [2024-04-18 11:19:56.030009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.930 [2024-04-18 11:19:56.030088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.930 [2024-04-18 11:19:56.030125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.930 [2024-04-18 11:19:56.047954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.930 [2024-04-18 11:19:56.048050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.930 [2024-04-18 11:19:56.048074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.930 [2024-04-18 11:19:56.065734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.930 [2024-04-18 11:19:56.065827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.930 [2024-04-18 11:19:56.065867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.931 [2024-04-18 11:19:56.083721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.931 [2024-04-18 11:19:56.083782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.931 [2024-04-18 11:19:56.083814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.931 [2024-04-18 11:19:56.102370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:47.931 [2024-04-18 11:19:56.102459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.931 [2024-04-18 11:19:56.102500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.931 00:28:47.931 Latency(us) 00:28:47.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.931 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:47.931 nvme0n1 : 2.01 13804.84 53.93 0.00 0.00 9259.78 4379.00 27882.59 00:28:47.931 =================================================================================================================== 00:28:47.931 Total : 13804.84 53.93 0.00 0.00 9259.78 4379.00 27882.59 00:28:47.931 0 00:28:47.931 11:19:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:47.931 11:19:56 -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:28:47.931 11:19:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:47.931 | .driver_specific 00:28:47.931 | .nvme_error 00:28:47.931 | .status_code 00:28:47.931 | .command_transient_transport_error' 00:28:47.931 11:19:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:48.498 11:19:56 -- host/digest.sh@71 -- # (( 108 > 0 )) 00:28:48.498 11:19:56 -- host/digest.sh@73 -- # killprocess 88118 00:28:48.498 11:19:56 -- common/autotest_common.sh@936 -- # '[' -z 88118 ']' 00:28:48.498 11:19:56 -- common/autotest_common.sh@940 -- # kill -0 88118 00:28:48.498 11:19:56 -- common/autotest_common.sh@941 -- # uname 00:28:48.498 11:19:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:48.498 11:19:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88118 00:28:48.498 11:19:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:48.498 11:19:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:48.498 killing process with pid 88118 00:28:48.498 11:19:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88118' 00:28:48.498 11:19:56 -- common/autotest_common.sh@955 -- # kill 88118 00:28:48.498 Received shutdown signal, test time was about 2.000000 seconds 00:28:48.498 00:28:48.498 Latency(us) 00:28:48.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.498 =================================================================================================================== 00:28:48.498 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:48.498 11:19:56 -- common/autotest_common.sh@960 -- # wait 88118 00:28:49.432 11:19:57 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:49.432 11:19:57 -- host/digest.sh@54 -- # local rw bs qd 00:28:49.432 11:19:57 -- host/digest.sh@56 -- # rw=randread 00:28:49.432 11:19:57 -- host/digest.sh@56 -- # bs=131072 00:28:49.432 11:19:57 -- host/digest.sh@56 -- # qd=16 00:28:49.432 11:19:57 -- host/digest.sh@58 -- # bperfpid=88215 00:28:49.432 11:19:57 -- host/digest.sh@60 -- # waitforlisten 88215 /var/tmp/bperf.sock 00:28:49.432 11:19:57 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:49.432 11:19:57 -- common/autotest_common.sh@817 -- # '[' -z 88215 ']' 00:28:49.432 11:19:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:49.432 11:19:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:49.432 11:19:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:49.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:49.432 11:19:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:49.432 11:19:57 -- common/autotest_common.sh@10 -- # set +x 00:28:49.432 [2024-04-18 11:19:57.633807] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:49.432 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:49.432 Zero copy mechanism will not be used. 
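The transient-error assertion that closed out the previous run (the "(( 108 > 0 ))" check above) reduces to one RPC round-trip plus a jq filter over the iostat JSON. A minimal sketch of that check, reusing the rpc.py path, the /var/tmp/bperf.sock socket and the jq field path exactly as they appear in the xtrace; the bdev name nvme0n1 is the controller attached earlier in the test:

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # Pass only if the injected digest corruption was surfaced as counted
  # transient transport errors (108 of them in the run that just finished).
  (( errcount > 0 ))

The same pattern is repeated below for the 131072-byte, queue-depth-16 run that run_bperf_err starts next.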
00:28:49.432 [2024-04-18 11:19:57.634133] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88215 ] 00:28:49.691 [2024-04-18 11:19:57.810228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.949 [2024-04-18 11:19:58.127082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.517 11:19:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:50.517 11:19:58 -- common/autotest_common.sh@850 -- # return 0 00:28:50.517 11:19:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:50.517 11:19:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:50.774 11:19:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:50.774 11:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:50.774 11:19:58 -- common/autotest_common.sh@10 -- # set +x 00:28:50.774 11:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:50.774 11:19:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.774 11:19:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.034 nvme0n1 00:28:51.292 11:19:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:51.292 11:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.292 11:19:59 -- common/autotest_common.sh@10 -- # set +x 00:28:51.292 11:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.292 11:19:59 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:51.292 11:19:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:51.292 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.292 Zero copy mechanism will not be used. 00:28:51.292 Running I/O for 2 seconds... 
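For readability, the bperf setup that the xtrace above performs, written out as a plain command sequence. This is a sketch reconstructed from the log, not the script itself: it assumes rpc_cmd (whose expansion is not shown above) maps to rpc.py against the default RPC socket, while the bperf_rpc calls target /var/tmp/bperf.sock; every flag value is copied verbatim from the lines above.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bdevperf side: keep per-status-code NVMe error counters and retry I/O indefinitely.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any earlier crc32c error injection (assumed to go to the default RPC socket).
  $rpc accel_error_inject_error -o crc32c -t disable
  # Attach the namespace over TCP with data digest (--ddgst) enabled.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-enable injection in corrupt mode so crc32c results stop matching the data.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the timed I/O run through the bperf socket.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With digests failing, each affected 128 KiB read (len:32 blocks) in the two-second run completes with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status that the rest of this listing records.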
00:28:51.292 [2024-04-18 11:19:59.398893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.292 [2024-04-18 11:19:59.399027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.292 [2024-04-18 11:19:59.399051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.292 [2024-04-18 11:19:59.405318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.292 [2024-04-18 11:19:59.405411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.292 [2024-04-18 11:19:59.405433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.292 [2024-04-18 11:19:59.411863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.292 [2024-04-18 11:19:59.411985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.292 [2024-04-18 11:19:59.412007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.292 [2024-04-18 11:19:59.418315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.418416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.418437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.424639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.424699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.424722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.431045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.431142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.431165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.437701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.437781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.437803] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.443746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.443810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.443832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.450462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.450526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.450549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.456970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.457079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.457115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.463447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.463507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.463529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.469731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.469798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.469819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.476579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.476643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.476666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.482810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.482944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:51.293 [2024-04-18 11:19:59.482986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.489273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.489340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.489363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.495457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.495519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.495540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.502167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.502242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.502280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.293 [2024-04-18 11:19:59.509641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.293 [2024-04-18 11:19:59.509727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.293 [2024-04-18 11:19:59.509755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.552 [2024-04-18 11:19:59.515273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.552 [2024-04-18 11:19:59.515334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.552 [2024-04-18 11:19:59.515355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.552 [2024-04-18 11:19:59.522595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.552 [2024-04-18 11:19:59.522670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.552 [2024-04-18 11:19:59.522692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.552 [2024-04-18 11:19:59.527335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.552 [2024-04-18 11:19:59.527397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.552 [2024-04-18 11:19:59.527418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.552 [2024-04-18 11:19:59.533194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.552 [2024-04-18 11:19:59.533255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.552 [2024-04-18 11:19:59.533279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.552 [2024-04-18 11:19:59.539965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.552 [2024-04-18 11:19:59.540037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.552 [2024-04-18 11:19:59.540059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.552 [2024-04-18 11:19:59.546674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.546739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.546761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.551351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.551422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.551443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.557348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.557408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.557429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.564221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.564281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.564303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.571137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.571195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.571217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.577257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.577331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.577354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.583827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.583901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.583922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.590994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.591086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.591160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.597991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.598086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.598108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.602656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.602717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.602738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.608795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.608858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.608879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.615482] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.615569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.615591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.621660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.621722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.621744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.628143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.628202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.628224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.632472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.632543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.632565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.639206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.639266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.639288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.643915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.643973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.643995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.649912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.650005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.650027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.656809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.656873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.656895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.663477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.663541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.663564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.667474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.667543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.667570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.672688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.672748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.672774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.679487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.679557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.679579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.684028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.684118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.684140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.690445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.690507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.690530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.697239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.697322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.697351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.704289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.553 [2024-04-18 11:19:59.704359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.553 [2024-04-18 11:19:59.704383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.553 [2024-04-18 11:19:59.709193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.709251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.709274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.554 [2024-04-18 11:19:59.714400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.714460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.714483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.554 [2024-04-18 11:19:59.720996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.721057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.721080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.554 [2024-04-18 11:19:59.728041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.728138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.728161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.554 [2024-04-18 11:19:59.734870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.734957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.734979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.554 [2024-04-18 11:19:59.739494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.739583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.739604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.554 [2024-04-18 11:19:59.746483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.746580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.746608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.554 [2024-04-18 11:19:59.753117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.753197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.753225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.554 [2024-04-18 11:19:59.760070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.760162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.760185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.554 [2024-04-18 11:19:59.766746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.766803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.766825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.554 [2024-04-18 11:19:59.771122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.554 [2024-04-18 11:19:59.771185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.554 [2024-04-18 11:19:59.771206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.814 [2024-04-18 11:19:59.778045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.814 [2024-04-18 11:19:59.778121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.814 [2024-04-18 11:19:59.778145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.814 [2024-04-18 11:19:59.784989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.814 [2024-04-18 11:19:59.785059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.814 [2024-04-18 11:19:59.785080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.791908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.791980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.792001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.796526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.796582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.796603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.802842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.802972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.802991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.808268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.808334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.808355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.813130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.813207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.813228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.818397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.818481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.818502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.824126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.824179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.824210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.831243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.831314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.831335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.838107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.838187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.838220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.843020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.843081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.843121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.849228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.849331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.849353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.856699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.856769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.856791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.863219] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.863306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.863327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.868008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.868091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.868111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.873731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.873789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.873811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.880923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.881025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.881054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.887923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.887992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.888014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.894375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.894496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.894517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.898844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.898897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.898918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.905274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.905329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.905351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.911887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.911971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.912023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.918121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.918218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.918238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.924043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.924116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.924139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.928537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.928593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.928613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.815 [2024-04-18 11:19:59.935422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.815 [2024-04-18 11:19:59.935505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.815 [2024-04-18 11:19:59.935526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:19:59.942173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:19:59.942226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:19:59.942256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:19:59.946748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:19:59.946802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:19:59.946825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:19:59.953352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:19:59.953408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:19:59.953432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:19:59.959664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:19:59.959715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:19:59.959735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:19:59.963916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:19:59.963981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:19:59.964013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:19:59.970644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:19:59.970722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:19:59.970743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:19:59.978024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:19:59.978092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:19:59.978168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:19:59.982998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:19:59.983064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:19:59.983084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:19:59.989011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:19:59.989152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:19:59.989176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:19:59.995851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:19:59.995940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:19:59.995976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:20:00.002779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:20:00.002836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:20:00.002858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:20:00.007629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:20:00.007702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:20:00.007729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:20:00.013827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:20:00.013904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:20:00.013926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:20:00.020450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:20:00.020518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:20:00.020540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:20:00.027303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:20:00.027393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:20:00.027431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.816 [2024-04-18 11:20:00.034208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:51.816 [2024-04-18 11:20:00.034265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.816 [2024-04-18 11:20:00.034287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.040685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.040746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.040767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.047292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.047353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.047375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.054089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.054188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.054211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.061034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.061144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.061167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.068382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.068453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.068512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.073211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.073269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.073290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.079341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.079398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.079419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.085741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.085815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.085836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.092118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.092208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.092231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.096904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.096961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.096983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.102538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.102604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.102641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.075 [2024-04-18 11:20:00.108965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.075 [2024-04-18 11:20:00.109034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.075 [2024-04-18 11:20:00.109060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.115226] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.115310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.115332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.119648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.119732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.119753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.125154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.125252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.125274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.130250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.130321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.130347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.135761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.135816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.135838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.142393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.142489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.142510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.146878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.146938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.146959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.154205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.154277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.154299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.160846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.160925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.160952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.165770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.165835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.165858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.171717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.171785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.171806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.177264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.177351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.177372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.184386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.184460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.184491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.191414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.191490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.191511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.196508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.196565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.196587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.202241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.202298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.202320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.209270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.209361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.209383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.215306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.215367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.215389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.219842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.219921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.219957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.226746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.226819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.226854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.232935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.233018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.233053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.237718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.237786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.237828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.242510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.242585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.242614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.248374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.248430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.248452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.253521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.253596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.253617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.258425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.258493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.258515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.076 [2024-04-18 11:20:00.263511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.076 [2024-04-18 11:20:00.263576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.076 [2024-04-18 11:20:00.263598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.077 [2024-04-18 11:20:00.268873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.077 [2024-04-18 11:20:00.268934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.077 [2024-04-18 11:20:00.268955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.077 [2024-04-18 11:20:00.275245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.077 [2024-04-18 11:20:00.275316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.077 [2024-04-18 11:20:00.275337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.077 [2024-04-18 11:20:00.279927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.077 [2024-04-18 11:20:00.279984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.077 [2024-04-18 11:20:00.280005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.077 [2024-04-18 11:20:00.285942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.077 [2024-04-18 11:20:00.286038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.077 [2024-04-18 11:20:00.286059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.077 [2024-04-18 11:20:00.292674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.077 [2024-04-18 11:20:00.292733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.077 [2024-04-18 11:20:00.292755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.335 [2024-04-18 11:20:00.296792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.335 [2024-04-18 11:20:00.296854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.335 [2024-04-18 11:20:00.296875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.335 [2024-04-18 11:20:00.302310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.335 [2024-04-18 11:20:00.302397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.335 [2024-04-18 11:20:00.302418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.335 [2024-04-18 11:20:00.307734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x614000007240) 00:28:52.335 [2024-04-18 11:20:00.307805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.335 [2024-04-18 11:20:00.307832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.335 [2024-04-18 11:20:00.312923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.335 [2024-04-18 11:20:00.312977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.335 [2024-04-18 11:20:00.312999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.335 [2024-04-18 11:20:00.318619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.335 [2024-04-18 11:20:00.318686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.335 [2024-04-18 11:20:00.318707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.335 [2024-04-18 11:20:00.324238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.335 [2024-04-18 11:20:00.324318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.335 [2024-04-18 11:20:00.324338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.335 [2024-04-18 11:20:00.329408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.329500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.329521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.335202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.335273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.335297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.339409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.339494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.339514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.345243] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.345307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.345329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.352019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.352103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.352158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.358555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.358610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.358631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.362639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.362721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.362742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.368648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.368704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.368724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.373737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.373822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.373844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.379270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.379329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.379350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.384997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.385054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.385075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.390037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.390144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.390166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.394968] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.395049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.395069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.400735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.400790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.400812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.407082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.407148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.407171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.411910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.411963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.411984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.417391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.417475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.417502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.423088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.423193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.423214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.428837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.428895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.428917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.434116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.434195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.434216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.439888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.439944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.439965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.445222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.445277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.445298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.450777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.450834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.450854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.455494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.455552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.455574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.462225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.462314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.462335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.466328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.466384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.466406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.472129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.336 [2024-04-18 11:20:00.472183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.336 [2024-04-18 11:20:00.472204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.336 [2024-04-18 11:20:00.477522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.477579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.477601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.482282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.482352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.482373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.488753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.488811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.488848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.493846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.493946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.493967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.499093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.499189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.499209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.506282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.506339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.506361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.512937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.513020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.513056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.517852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.517934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.517971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.523687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.523757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.523777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.530684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.530745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.530767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.537248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.537348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.537369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.541394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.541476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.541496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.547952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.548033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.548054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.337 [2024-04-18 11:20:00.552797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.337 [2024-04-18 11:20:00.552895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.337 [2024-04-18 11:20:00.552931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.596 [2024-04-18 11:20:00.558800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.596 [2024-04-18 11:20:00.558856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.596 [2024-04-18 11:20:00.558878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.596 [2024-04-18 11:20:00.562984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.563075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.563136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.568831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.568886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.568908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.572750] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.572815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.572843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.578365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.578423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.578444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.585043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.585122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.585144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.591220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.591303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.591322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.598044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.598129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.598153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.604456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.604536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.604558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.611349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.611418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.611438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.616859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.616916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.616937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.621648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.621704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.621725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.627535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.627592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.627619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.632609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.632674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.632695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.638371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.638486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.638512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.644097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.644165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.644187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.648973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.649050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.649076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.655725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.655781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.655802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.662325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.662428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.662449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.667967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.668051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.668088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.672001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.672083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.672103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.678627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.678687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.678709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.597 [2024-04-18 11:20:00.682879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.597 [2024-04-18 11:20:00.682961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.597 [2024-04-18 11:20:00.682981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.687644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.687725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.687746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.693452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.693511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.693532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.698424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.698510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.698530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.703037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.703149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.703174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.708263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.708316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.708337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.714068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.714135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.714157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.718971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.719052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.719074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.724685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.724760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.724782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.731171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.731231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.731253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.735078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.735144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.735167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.741420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.741523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.741544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.747135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.747189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.747210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.751541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.751597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.751618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.758037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.758134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.758157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.764765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.764833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.764856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.771545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.771599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.771630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.775692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.775751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.775772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.782288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.782367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.782388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.788946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.789030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.789050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.795296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.795362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.795384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.799796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.799895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.799916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.805506] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.598 [2024-04-18 11:20:00.805567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.598 [2024-04-18 11:20:00.805589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.598 [2024-04-18 11:20:00.811789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.599 [2024-04-18 11:20:00.811877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.599 [2024-04-18 11:20:00.811898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.599 [2024-04-18 11:20:00.816247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.599 [2024-04-18 11:20:00.816329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.599 [2024-04-18 11:20:00.816348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.859 [2024-04-18 11:20:00.823485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.859 [2024-04-18 11:20:00.823571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.859 [2024-04-18 11:20:00.823607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.859 [2024-04-18 11:20:00.830510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.859 [2024-04-18 11:20:00.830582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.859 [2024-04-18 11:20:00.830612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.859 [2024-04-18 11:20:00.835605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.859 [2024-04-18 11:20:00.835687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.859 [2024-04-18 11:20:00.835709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.859 [2024-04-18 11:20:00.841015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.859 [2024-04-18 11:20:00.841103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.859 [2024-04-18 11:20:00.841138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.859 [2024-04-18 11:20:00.847693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.859 [2024-04-18 11:20:00.847752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.859 [2024-04-18 11:20:00.847773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.859 [2024-04-18 11:20:00.854288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.859 [2024-04-18 11:20:00.854374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.859 [2024-04-18 11:20:00.854395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.859 [2024-04-18 11:20:00.860896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.859 [2024-04-18 11:20:00.860995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.859 [2024-04-18 11:20:00.861016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.867577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.867677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.867703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.874335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.874440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.874489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.881100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.881173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.881204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.885634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.885720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.885741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.892345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.892431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.892452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.899266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.899352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.899389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.904783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.904845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.904867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.909540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.909594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.909618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.915097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.915158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.915195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.919895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.919961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.919981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.925267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.925330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.925351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.930682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.930740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.930762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.936461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.936543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.936565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.942489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.942549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.942571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.947809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.947896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.947931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.953123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.953181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.953204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.958985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.959053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.959089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.964788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.964843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.964865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.969836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.969888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.969910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.974945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.975026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.975047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.981798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.981859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.981881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.986834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.986935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.986957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.992406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.992464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.992506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:00.998774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:00.998861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:00.998884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:01.005317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:01.005408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:01.005447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:01.009855] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:01.009912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:01.009934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.860 [2024-04-18 11:20:01.017006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.860 [2024-04-18 11:20:01.017082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.860 [2024-04-18 11:20:01.017131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.023088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.023169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.023192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.027999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.028055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.028076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.033731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.033818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.033839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.039329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.039389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.039411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.044657] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.044714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.044736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.049735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.049793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.049814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.055496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.055579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.055601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.060857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.060920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.060941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.066530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.066589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.066612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.072061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.072169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.072190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.861 [2024-04-18 11:20:01.077210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:52.861 [2024-04-18 11:20:01.077276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.861 [2024-04-18 11:20:01.077324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.083301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.083385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.083405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.090234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.090293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.090316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.094541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.094597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.094618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.101377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.101461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.101482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.107223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.107325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.107347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.111518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.111601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.111623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.117598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.117685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.117705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.124101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.124208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.124229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.128366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.128422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.128443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.134661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.134741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.134770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.139611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.139672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.139693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.144856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.144913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.144935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.150124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.150204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.150225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.155329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.121 [2024-04-18 11:20:01.155386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:53.121 [2024-04-18 11:20:01.155408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.121 [2024-04-18 11:20:01.160687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.160741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.160763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.165566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.165640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.165665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.170673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.170728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.170750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.177733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.177821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.177858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.182522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.182587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.182614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.188608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.188669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.188689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.195686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.195744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.195765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.202346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.202429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.202464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.207257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.207339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.207359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.212595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.212652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.212674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.219363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.219432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.219483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.225214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.225271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.225292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.229437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.229547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.229575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.235095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.235203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.235228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.241224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.241280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.241302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.246115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.246207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.246228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.252304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.252402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.252440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.257988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.258047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.258068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.262889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.262945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.262966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.268172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.268270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.268291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.273877] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.273934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.273955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.279056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.279152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.279177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.285136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.285222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.285259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.291144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.291214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.122 [2024-04-18 11:20:01.291236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.122 [2024-04-18 11:20:01.296051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.122 [2024-04-18 11:20:01.296134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.123 [2024-04-18 11:20:01.296168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.123 [2024-04-18 11:20:01.301817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.123 [2024-04-18 11:20:01.301874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.123 [2024-04-18 11:20:01.301894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.123 [2024-04-18 11:20:01.307425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.123 [2024-04-18 11:20:01.307506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.123 [2024-04-18 11:20:01.307537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.123 [2024-04-18 11:20:01.312540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.123 [2024-04-18 11:20:01.312594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.123 [2024-04-18 11:20:01.312615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.123 [2024-04-18 11:20:01.318251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.123 [2024-04-18 11:20:01.318339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.123 [2024-04-18 11:20:01.318359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.123 [2024-04-18 11:20:01.321940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.123 [2024-04-18 11:20:01.321999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.123 [2024-04-18 11:20:01.322026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.123 [2024-04-18 11:20:01.328349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.123 [2024-04-18 11:20:01.328411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.123 [2024-04-18 11:20:01.328438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.123 [2024-04-18 11:20:01.335022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.123 [2024-04-18 11:20:01.335091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.123 [2024-04-18 11:20:01.335157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.381 [2024-04-18 11:20:01.341600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.382 [2024-04-18 11:20:01.341658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.382 [2024-04-18 11:20:01.341679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.382 [2024-04-18 11:20:01.347970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.382 [2024-04-18 11:20:01.348040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.382 [2024-04-18 11:20:01.348067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.382 [2024-04-18 11:20:01.354421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.382 [2024-04-18 11:20:01.354498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.382 [2024-04-18 11:20:01.354519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.382 [2024-04-18 11:20:01.358135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.382 [2024-04-18 11:20:01.358215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.382 [2024-04-18 11:20:01.358252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.382 [2024-04-18 11:20:01.364966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.382 [2024-04-18 11:20:01.365028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.382 [2024-04-18 11:20:01.365050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.382 [2024-04-18 11:20:01.371864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.382 [2024-04-18 11:20:01.371933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.382 [2024-04-18 11:20:01.371956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.382 [2024-04-18 11:20:01.378756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.382 [2024-04-18 11:20:01.378817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.382 [2024-04-18 11:20:01.378841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.382 [2024-04-18 11:20:01.385649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.382 [2024-04-18 11:20:01.385707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.382 [2024-04-18 11:20:01.385729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.382 [2024-04-18 11:20:01.392328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:28:53.382 [2024-04-18 11:20:01.392385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:28:53.382 [2024-04-18 11:20:01.392408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:53.382
00:28:53.382 Latency(us)
00:28:53.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:53.382 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:53.382 nvme0n1 : 2.00 5302.85 662.86 0.00 0.00 3011.92 837.82 7685.59
00:28:53.382 ===================================================================================================================
00:28:53.382 Total : 5302.85 662.86 0.00 0.00 3011.92 837.82 7685.59
00:28:53.382 0
00:28:53.382 11:20:01 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:53.382 11:20:01 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:53.382 11:20:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:53.382 11:20:01 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:53.382 | .driver_specific
00:28:53.382 | .nvme_error
00:28:53.382 | .status_code
00:28:53.382 | .command_transient_transport_error'
00:28:53.662 11:20:01 -- host/digest.sh@71 -- # (( 342 > 0 ))
00:28:53.662 11:20:01 -- host/digest.sh@73 -- # killprocess 88215
00:28:53.662 11:20:01 -- common/autotest_common.sh@936 -- # '[' -z 88215 ']'
00:28:53.662 11:20:01 -- common/autotest_common.sh@940 -- # kill -0 88215
00:28:53.662 11:20:01 -- common/autotest_common.sh@941 -- # uname
00:28:53.662 11:20:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:53.662 11:20:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88215
00:28:53.662 11:20:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:53.662 11:20:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:53.662 killing process with pid 88215
00:28:53.662 11:20:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88215'
00:28:53.662 Received shutdown signal, test time was about 2.000000 seconds
00:28:53.662
00:28:53.662 Latency(us)
00:28:53.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:53.662 ===================================================================================================================
00:28:53.662 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:53.662 11:20:01 -- common/autotest_common.sh@955 -- # kill 88215
00:28:53.662 11:20:01 -- common/autotest_common.sh@960 -- # wait 88215
00:28:55.066 11:20:03 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:55.066 11:20:03 -- host/digest.sh@54 -- # local rw bs qd
00:28:55.066 11:20:03 -- host/digest.sh@56 -- # rw=randwrite
00:28:55.066 11:20:03 -- host/digest.sh@56 -- # bs=4096
00:28:55.066 11:20:03 -- host/digest.sh@56 -- # qd=128
00:28:55.066 11:20:03 -- host/digest.sh@58 -- # bperfpid=88322
00:28:55.066 11:20:03 -- host/digest.sh@60 -- # waitforlisten 88322 /var/tmp/bperf.sock
00:28:55.066 11:20:03 -- common/autotest_common.sh@817 -- # '[' -z 88322 ']'
00:28:55.066 11:20:03 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:55.066 11:20:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:55.066 11:20:03 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:55.066 11:20:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:55.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:55.066 11:20:03 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:55.066 11:20:03 -- common/autotest_common.sh@10 -- # set +x
00:28:55.066 [2024-04-18 11:20:03.137673] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization...
00:28:55.066 [2024-04-18 11:20:03.137874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88322 ]
00:28:55.325 [2024-04-18 11:20:03.317896] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:55.583 [2024-04-18 11:20:03.621800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:56.149 11:20:04 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:28:56.149 11:20:04 -- common/autotest_common.sh@850 -- # return 0
00:28:56.149 11:20:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:56.149 11:20:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:56.407 11:20:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:56.407 11:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:56.407 11:20:04 -- common/autotest_common.sh@10 -- # set +x
00:28:56.407 11:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:56.407 11:20:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:56.407 11:20:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:56.664 nvme0n1
00:28:56.664 11:20:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:56.664 11:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:56.664 11:20:04 -- common/autotest_common.sh@10 -- # set +x
00:28:56.664 11:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:56.665 11:20:04 -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:56.665 11:20:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:56.922 Running I/O for 2 seconds...
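For readability, the host/digest.sh trace above reduces to the following RPC sequence. This is a condensed sketch assembled only from commands already shown in this log (same bdevperf binary, socket path, target address and NQN as this run); it is a reading aid rather than an additional step of the job, and backgrounding bdevperf with & is an assumption based on the script's waitforlisten call.

  # bdevperf is started with -z, so it idles until perform_tests arrives on its RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # keep per-command NVMe error statistics and retry failed I/O indefinitely
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # clear any stale crc32c error injection, then attach the target with data digest (--ddgst) enabled
  $rpc accel_error_inject_error -o crc32c -t disable
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the next 256 crc32c computations so every data digest check on received data fails
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  # run the queued randwrite job for 2 seconds
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # afterwards the script reads back the transient transport error count (342 in the randread pass above)
  $rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'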
00:28:56.922 [2024-04-18 11:20:04.962869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:28:56.922 [2024-04-18 11:20:04.964280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.922 [2024-04-18 11:20:04.964337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:56.922 [2024-04-18 11:20:04.981083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4de8 00:28:56.922 [2024-04-18 11:20:04.983280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.922 [2024-04-18 11:20:04.983328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:56.922 [2024-04-18 11:20:04.991803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:28:56.922 [2024-04-18 11:20:04.992770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.922 [2024-04-18 11:20:04.992816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:56.923 [2024-04-18 11:20:05.009718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1868 00:28:56.923 [2024-04-18 11:20:05.011483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.923 [2024-04-18 11:20:05.011527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:56.923 [2024-04-18 11:20:05.023725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:28:56.923 [2024-04-18 11:20:05.024999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.923 [2024-04-18 11:20:05.025046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:56.923 [2024-04-18 11:20:05.038033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:28:56.923 [2024-04-18 11:20:05.039055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.923 [2024-04-18 11:20:05.039100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:56.923 [2024-04-18 11:20:05.053272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:28:56.923 [2024-04-18 11:20:05.054665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.923 [2024-04-18 11:20:05.054711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:56.923 [2024-04-18 11:20:05.070789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3060 00:28:56.923 [2024-04-18 11:20:05.073002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.923 [2024-04-18 11:20:05.073048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:56.923 [2024-04-18 11:20:05.081258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:28:56.923 [2024-04-18 11:20:05.082278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.923 [2024-04-18 11:20:05.082322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:56.923 [2024-04-18 11:20:05.099747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:28:56.923 [2024-04-18 11:20:05.101784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.923 [2024-04-18 11:20:05.101840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:56.923 [2024-04-18 11:20:05.114421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:28:56.923 [2024-04-18 11:20:05.115815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.923 [2024-04-18 11:20:05.115887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:56.923 [2024-04-18 11:20:05.130090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:28:56.923 [2024-04-18 11:20:05.131723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.923 [2024-04-18 11:20:05.131767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.145278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaef0 00:28:57.181 [2024-04-18 11:20:05.146096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.181 [2024-04-18 11:20:05.146148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.163088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:28:57.181 [2024-04-18 11:20:05.164990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.181 [2024-04-18 11:20:05.165035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.177573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5a90 00:28:57.181 [2024-04-18 11:20:05.179086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.181 [2024-04-18 11:20:05.179138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.191984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ff3c8 00:28:57.181 [2024-04-18 11:20:05.193428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.181 [2024-04-18 11:20:05.193473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.206460] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:28:57.181 [2024-04-18 11:20:05.207642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.181 [2024-04-18 11:20:05.207688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.221429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1430 00:28:57.181 [2024-04-18 11:20:05.222454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.181 [2024-04-18 11:20:05.222495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.238322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:28:57.181 [2024-04-18 11:20:05.239568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.181 [2024-04-18 11:20:05.239613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.253199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:28:57.181 [2024-04-18 11:20:05.254127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.181 [2024-04-18 11:20:05.254195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.267401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f96f8 00:28:57.181 [2024-04-18 11:20:05.268293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:57.181 [2024-04-18 11:20:05.268336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.284126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:28:57.181 [2024-04-18 11:20:05.286021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.181 [2024-04-18 11:20:05.286065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.297470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:28:57.181 [2024-04-18 11:20:05.298917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.181 [2024-04-18 11:20:05.298974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:57.181 [2024-04-18 11:20:05.311671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f57b0 00:28:57.181 [2024-04-18 11:20:05.313208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.182 [2024-04-18 11:20:05.313250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:57.182 [2024-04-18 11:20:05.328915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0ff8 00:28:57.182 [2024-04-18 11:20:05.331281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.182 [2024-04-18 11:20:05.331322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:57.182 [2024-04-18 11:20:05.339058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4298 00:28:57.182 [2024-04-18 11:20:05.340012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.182 [2024-04-18 11:20:05.340058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:57.182 [2024-04-18 11:20:05.354081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:28:57.182 [2024-04-18 11:20:05.355361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.182 [2024-04-18 11:20:05.355417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:57.182 [2024-04-18 11:20:05.371159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:28:57.182 [2024-04-18 11:20:05.373351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 
nsid:1 lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.182 [2024-04-18 11:20:05.373393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:57.182 [2024-04-18 11:20:05.381163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:28:57.182 [2024-04-18 11:20:05.382218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.182 [2024-04-18 11:20:05.382258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:57.182 [2024-04-18 11:20:05.399915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e95a0 00:28:57.182 [2024-04-18 11:20:05.401939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.182 [2024-04-18 11:20:05.401998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.413461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:28:57.440 [2024-04-18 11:20:05.414766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.414819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.426789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:28:57.440 [2024-04-18 11:20:05.428241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.428281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.440035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:28:57.440 [2024-04-18 11:20:05.440837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.440909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.457384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6020 00:28:57.440 [2024-04-18 11:20:05.459623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.459665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.467261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:28:57.440 [2024-04-18 11:20:05.468358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.468398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.484130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa7d8 00:28:57.440 [2024-04-18 11:20:05.486049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.486091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.497110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:28:57.440 [2024-04-18 11:20:05.498489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.498529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.510236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:28:57.440 [2024-04-18 11:20:05.511563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.511602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.524979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:28:57.440 [2024-04-18 11:20:05.526995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.527036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.533747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:28:57.440 [2024-04-18 11:20:05.534812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.534850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.549612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:28:57.440 [2024-04-18 11:20:05.551612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.551654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.559994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea248 
00:28:57.440 [2024-04-18 11:20:05.560886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.560925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.577508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:28:57.440 [2024-04-18 11:20:05.579174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.579227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:57.440 [2024-04-18 11:20:05.591262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0ea0 00:28:57.440 [2024-04-18 11:20:05.592447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.440 [2024-04-18 11:20:05.592504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:57.441 [2024-04-18 11:20:05.605889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ff3c8 00:28:57.441 [2024-04-18 11:20:05.607270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.441 [2024-04-18 11:20:05.607328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:57.441 [2024-04-18 11:20:05.623754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6300 00:28:57.441 [2024-04-18 11:20:05.626039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.441 [2024-04-18 11:20:05.626083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:57.441 [2024-04-18 11:20:05.634391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:28:57.441 [2024-04-18 11:20:05.635355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.441 [2024-04-18 11:20:05.635399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:57.441 [2024-04-18 11:20:05.652075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:28:57.441 [2024-04-18 11:20:05.653874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.441 [2024-04-18 11:20:05.653949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:57.700 [2024-04-18 11:20:05.665898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 00:28:57.700 [2024-04-18 11:20:05.667223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.700 [2024-04-18 11:20:05.667266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:57.700 [2024-04-18 11:20:05.680388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:28:57.700 [2024-04-18 11:20:05.681834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.700 [2024-04-18 11:20:05.681880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:57.700 [2024-04-18 11:20:05.698461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8 00:28:57.700 [2024-04-18 11:20:05.700832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.700 [2024-04-18 11:20:05.700895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:57.700 [2024-04-18 11:20:05.709747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:28:57.700 [2024-04-18 11:20:05.710932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.700 [2024-04-18 11:20:05.710984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:57.700 [2024-04-18 11:20:05.728784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4140 00:28:57.700 [2024-04-18 11:20:05.730758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.700 [2024-04-18 11:20:05.730803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:57.700 [2024-04-18 11:20:05.743157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:28:57.700 [2024-04-18 11:20:05.744681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.700 [2024-04-18 11:20:05.744727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:57.700 [2024-04-18 11:20:05.758739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb480 00:28:57.700 [2024-04-18 11:20:05.760447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.700 [2024-04-18 11:20:05.760515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:57.700 [2024-04-18 
11:20:05.777712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:28:57.700 [2024-04-18 11:20:05.780207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.700 [2024-04-18 11:20:05.780253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:57.700 [2024-04-18 11:20:05.788826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:28:57.700 [2024-04-18 11:20:05.790079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.700 [2024-04-18 11:20:05.790148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:57.700 [2024-04-18 11:20:05.807903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f96f8 00:28:57.700 [2024-04-18 11:20:05.810021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.701 [2024-04-18 11:20:05.810080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:57.701 [2024-04-18 11:20:05.822195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:28:57.701 [2024-04-18 11:20:05.823849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.701 [2024-04-18 11:20:05.823899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:57.701 [2024-04-18 11:20:05.837453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:28:57.701 [2024-04-18 11:20:05.839189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.701 [2024-04-18 11:20:05.839238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:57.701 [2024-04-18 11:20:05.851941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea248 00:28:57.701 [2024-04-18 11:20:05.853224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.701 [2024-04-18 11:20:05.853272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:57.701 [2024-04-18 11:20:05.866931] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:28:57.701 [2024-04-18 11:20:05.868255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.701 [2024-04-18 11:20:05.868299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 
cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:57.701 [2024-04-18 11:20:05.884881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0ea0 00:28:57.701 [2024-04-18 11:20:05.887023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.701 [2024-04-18 11:20:05.887066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:57.701 [2024-04-18 11:20:05.895534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:28:57.701 [2024-04-18 11:20:05.896455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.701 [2024-04-18 11:20:05.896506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:57.701 [2024-04-18 11:20:05.913494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:28:57.701 [2024-04-18 11:20:05.915262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.701 [2024-04-18 11:20:05.915307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:57.959 [2024-04-18 11:20:05.927308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:28:57.959 [2024-04-18 11:20:05.928555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.959 [2024-04-18 11:20:05.928604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:57.959 [2024-04-18 11:20:05.942252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9f68 00:28:57.960 [2024-04-18 11:20:05.943670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:05.943711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:05.960372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 00:28:57.960 [2024-04-18 11:20:05.962622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:05.962672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:05.970531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:28:57.960 [2024-04-18 11:20:05.971588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:05.971644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:05.988709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb328 00:28:57.960 [2024-04-18 11:20:05.990723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:05.990768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.003195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:28:57.960 [2024-04-18 11:20:06.004613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.004659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.017506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb328 00:28:57.960 [2024-04-18 11:20:06.018963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.019005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.034285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:28:57.960 [2024-04-18 11:20:06.036752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.036811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.045284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 00:28:57.960 [2024-04-18 11:20:06.046414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.046454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.062574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9f68 00:28:57.960 [2024-04-18 11:20:06.064520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.064564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.075408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:28:57.960 [2024-04-18 11:20:06.076738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.076804] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.089345] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:28:57.960 [2024-04-18 11:20:06.090528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.090570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.105261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dece0 00:28:57.960 [2024-04-18 11:20:06.106961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.107026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.124686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:28:57.960 [2024-04-18 11:20:06.127191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.127237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.135659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e12d8 00:28:57.960 [2024-04-18 11:20:06.136954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.137014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.154170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:28:57.960 [2024-04-18 11:20:06.156280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.156335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:57.960 [2024-04-18 11:20:06.168546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef6a8 00:28:57.960 [2024-04-18 11:20:06.170073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.960 [2024-04-18 11:20:06.170130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.184048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e27f0 00:28:58.274 [2024-04-18 11:20:06.185828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23516 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:58.274 [2024-04-18 11:20:06.185873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.198257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2d80 00:28:58.274 [2024-04-18 11:20:06.199414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.199459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.213573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:28:58.274 [2024-04-18 11:20:06.214599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.214666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.231201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:28:58.274 [2024-04-18 11:20:06.232663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.232719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.251128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195feb58 00:28:58.274 [2024-04-18 11:20:06.253406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.253459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.262810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed4e8 00:28:58.274 [2024-04-18 11:20:06.263782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.263830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.282320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2d80 00:28:58.274 [2024-04-18 11:20:06.284299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.284357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.297262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:28:58.274 [2024-04-18 11:20:06.298636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:16920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.298683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.312707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1868 00:28:58.274 [2024-04-18 11:20:06.314210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.314258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.331940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 00:28:58.274 [2024-04-18 11:20:06.334314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.334368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.343326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:28:58.274 [2024-04-18 11:20:06.344520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.344567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.362410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0630 00:28:58.274 [2024-04-18 11:20:06.364384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.364447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.377337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:28:58.274 [2024-04-18 11:20:06.378893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.378946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.393150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:28:58.274 [2024-04-18 11:20:06.394716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.394765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.412014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fdeb0 00:28:58.274 [2024-04-18 11:20:06.414455] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.414513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.423239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebfd0 00:28:58.274 [2024-04-18 11:20:06.424419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.424467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.442204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0ff8 00:28:58.274 [2024-04-18 11:20:06.444323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.444374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.456976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:28:58.274 [2024-04-18 11:20:06.458582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.274 [2024-04-18 11:20:06.458635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:58.274 [2024-04-18 11:20:06.472914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2510 00:28:58.274 [2024-04-18 11:20:06.474591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.275 [2024-04-18 11:20:06.474648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:58.275 [2024-04-18 11:20:06.492271] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:28:58.534 [2024-04-18 11:20:06.494885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.494938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.503712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:28:58.534 [2024-04-18 11:20:06.505146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.505192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.520209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195df988 00:28:58.534 [2024-04-18 11:20:06.521510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.521560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.535496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2510 00:28:58.534 [2024-04-18 11:20:06.536658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.536709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.553189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:28:58.534 [2024-04-18 11:20:06.555118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.555182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.572246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:28:58.534 [2024-04-18 11:20:06.574711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.574767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.583978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:28:58.534 [2024-04-18 11:20:06.585320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.585369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.603330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:28:58.534 [2024-04-18 11:20:06.605648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.605703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.614882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:28:58.534 [2024-04-18 11:20:06.615786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.615831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.634089] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:28:58.534 [2024-04-18 11:20:06.635884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.635935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.648821] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:28:58.534 [2024-04-18 11:20:06.650270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.650323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.663048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:28:58.534 [2024-04-18 11:20:06.664004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.664050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.682207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:28:58.534 [2024-04-18 11:20:06.684084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.684143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.697195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de8a8 00:28:58.534 [2024-04-18 11:20:06.698592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.698654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.713098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e99d8 00:28:58.534 [2024-04-18 11:20:06.714768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.714813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:58.534 [2024-04-18 11:20:06.732525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5a90 00:28:58.534 [2024-04-18 11:20:06.734910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.534 [2024-04-18 11:20:06.734962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:58.534 
[2024-04-18 11:20:06.743754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:28:58.535 [2024-04-18 11:20:06.744866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.535 [2024-04-18 11:20:06.744914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:58.792 [2024-04-18 11:20:06.762788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4de8 00:28:58.792 [2024-04-18 11:20:06.764755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.792 [2024-04-18 11:20:06.764807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:58.792 [2024-04-18 11:20:06.777528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eea00 00:28:58.792 [2024-04-18 11:20:06.779058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.792 [2024-04-18 11:20:06.779121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:58.792 [2024-04-18 11:20:06.793242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9f68 00:28:58.792 [2024-04-18 11:20:06.794795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.792 [2024-04-18 11:20:06.794843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:58.792 [2024-04-18 11:20:06.812308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe720 00:28:58.792 [2024-04-18 11:20:06.814842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.792 [2024-04-18 11:20:06.814892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:58.792 [2024-04-18 11:20:06.823756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2d80 00:28:58.792 [2024-04-18 11:20:06.824926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.792 [2024-04-18 11:20:06.824972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:58.792 [2024-04-18 11:20:06.842642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:28:58.793 [2024-04-18 11:20:06.844770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.793 [2024-04-18 11:20:06.844820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:58.793 [2024-04-18 11:20:06.857400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed4e8 00:28:58.793 [2024-04-18 11:20:06.858972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.793 [2024-04-18 11:20:06.859020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:58.793 [2024-04-18 11:20:06.872808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6890 00:28:58.793 [2024-04-18 11:20:06.874679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.793 [2024-04-18 11:20:06.874726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:58.793 [2024-04-18 11:20:06.891648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:28:58.793 [2024-04-18 11:20:06.894276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.793 [2024-04-18 11:20:06.894326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:58.793 [2024-04-18 11:20:06.902918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0630 00:28:58.793 [2024-04-18 11:20:06.904286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.793 [2024-04-18 11:20:06.904332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:58.793 [2024-04-18 11:20:06.921972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0788 00:28:58.793 [2024-04-18 11:20:06.924224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.793 [2024-04-18 11:20:06.924275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.793 [2024-04-18 11:20:06.936944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e2c28 00:28:58.793 [2024-04-18 11:20:06.938797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.793 [2024-04-18 11:20:06.938845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.793 00:28:58.793 Latency(us) 00:28:58.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.793 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:58.793 nvme0n1 : 2.01 16602.71 64.85 0.00 0.00 7700.99 3291.69 21805.61 00:28:58.793 
=================================================================================================================== 00:28:58.793 Total : 16602.71 64.85 0.00 0.00 7700.99 3291.69 21805.61 00:28:58.793 0 00:28:58.793 11:20:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:58.793 11:20:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:58.793 11:20:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:58.793 11:20:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:58.793 | .driver_specific 00:28:58.793 | .nvme_error 00:28:58.793 | .status_code 00:28:58.793 | .command_transient_transport_error' 00:28:59.358 11:20:07 -- host/digest.sh@71 -- # (( 130 > 0 )) 00:28:59.358 11:20:07 -- host/digest.sh@73 -- # killprocess 88322 00:28:59.358 11:20:07 -- common/autotest_common.sh@936 -- # '[' -z 88322 ']' 00:28:59.358 11:20:07 -- common/autotest_common.sh@940 -- # kill -0 88322 00:28:59.358 11:20:07 -- common/autotest_common.sh@941 -- # uname 00:28:59.358 11:20:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:59.358 11:20:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88322 00:28:59.358 11:20:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:59.358 11:20:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:59.358 killing process with pid 88322 00:28:59.358 11:20:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88322' 00:28:59.358 11:20:07 -- common/autotest_common.sh@955 -- # kill 88322 00:28:59.358 Received shutdown signal, test time was about 2.000000 seconds 00:28:59.358 00:28:59.358 Latency(us) 00:28:59.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.358 =================================================================================================================== 00:28:59.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:59.358 11:20:07 -- common/autotest_common.sh@960 -- # wait 88322 00:29:00.291 11:20:08 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:00.291 11:20:08 -- host/digest.sh@54 -- # local rw bs qd 00:29:00.291 11:20:08 -- host/digest.sh@56 -- # rw=randwrite 00:29:00.291 11:20:08 -- host/digest.sh@56 -- # bs=131072 00:29:00.291 11:20:08 -- host/digest.sh@56 -- # qd=16 00:29:00.291 11:20:08 -- host/digest.sh@58 -- # bperfpid=88420 00:29:00.291 11:20:08 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:00.291 11:20:08 -- host/digest.sh@60 -- # waitforlisten 88420 /var/tmp/bperf.sock 00:29:00.291 11:20:08 -- common/autotest_common.sh@817 -- # '[' -z 88420 ']' 00:29:00.291 11:20:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.291 11:20:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:00.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.291 11:20:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.291 11:20:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:00.291 11:20:08 -- common/autotest_common.sh@10 -- # set +x 00:29:00.291 [2024-04-18 11:20:08.430585] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:00.291 I/O size of 131072 is greater than zero copy threshold (65536). 
00:29:00.291 Zero copy mechanism will not be used. 00:29:00.291 [2024-04-18 11:20:08.430792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88420 ] 00:29:00.549 [2024-04-18 11:20:08.607559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.807 [2024-04-18 11:20:08.869918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.424 11:20:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:01.424 11:20:09 -- common/autotest_common.sh@850 -- # return 0 00:29:01.424 11:20:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:01.424 11:20:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:01.682 11:20:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:01.682 11:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.682 11:20:09 -- common/autotest_common.sh@10 -- # set +x 00:29:01.682 11:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.682 11:20:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.682 11:20:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.248 nvme0n1 00:29:02.248 11:20:10 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:02.248 11:20:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:02.248 11:20:10 -- common/autotest_common.sh@10 -- # set +x 00:29:02.248 11:20:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:02.248 11:20:10 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:02.248 11:20:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:02.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:02.248 Zero copy mechanism will not be used. 00:29:02.248 Running I/O for 2 seconds... 
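The trace above is the setup for the 131072-byte, queue-depth-16 error-injection run: bdevperf is started against /var/tmp/bperf.sock, the controller is attached over TCP with data digest enabled, crc32c corruption is injected on the target side, and perform_tests drives the I/O for two seconds. A condensed sketch of that flow, using only the sockets, addresses and RPCs that appear in the trace (paths are relative to the spdk repo root; the precise semantics of the '-i 32' injection argument are not restated here), not the digest.sh script itself:

  # host side: start bdevperf waiting for RPCs on its own socket (-z);
  # digest.sh backgrounds this and waits for the socket to appear
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # collect NVMe error statistics and retry indefinitely so injected errors stay non-fatal
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the TCP target with data digest enabled (--ddgst)
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # target side (rpc_cmd in the trace, i.e. the target app's default RPC socket): corrupt crc32c results
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # run the workload, then read back the transient transport error counter
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Every data_crc32_calc_done digest error in the stream below is expected to surface as a COMMAND TRANSIENT TRANSPORT ERROR completion, and the resulting counter is what host/digest.sh@71 asserts to be non-zero (compare the '(( 130 > 0 ))' check for the previous 4096-byte run above).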
00:29:02.248 [2024-04-18 11:20:10.397156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.248 [2024-04-18 11:20:10.397542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-04-18 11:20:10.397587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.248 [2024-04-18 11:20:10.404413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.249 [2024-04-18 11:20:10.404793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-04-18 11:20:10.404836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.249 [2024-04-18 11:20:10.411481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.249 [2024-04-18 11:20:10.411832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-04-18 11:20:10.411879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.249 [2024-04-18 11:20:10.418546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.249 [2024-04-18 11:20:10.418898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-04-18 11:20:10.418949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.249 [2024-04-18 11:20:10.425360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.249 [2024-04-18 11:20:10.425711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-04-18 11:20:10.425754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.249 [2024-04-18 11:20:10.432280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.249 [2024-04-18 11:20:10.432663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-04-18 11:20:10.432707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.249 [2024-04-18 11:20:10.439366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.249 [2024-04-18 11:20:10.439715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-04-18 11:20:10.439764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.249 [2024-04-18 11:20:10.446302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.249 [2024-04-18 11:20:10.446651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-04-18 11:20:10.446693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.249 [2024-04-18 11:20:10.453150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.249 [2024-04-18 11:20:10.453478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-04-18 11:20:10.453519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.249 [2024-04-18 11:20:10.459957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.249 [2024-04-18 11:20:10.460299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-04-18 11:20:10.460336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.249 [2024-04-18 11:20:10.466417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.249 [2024-04-18 11:20:10.466745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-04-18 11:20:10.466789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.472917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.473248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.473307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.479391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.479668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.479710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.485497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.485791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 
11:20:10.485846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.491910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.492201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.492233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.498150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.498501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.498550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.504584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.504870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.504915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.511060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.511374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.511405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.517617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.517909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.517940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.524344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.524645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.524678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.530943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.531298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.531332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.537516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.537848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.537880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.544137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.544432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.544466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.550773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.551097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.551159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.557514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.557839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.557871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.564276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.564625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.564664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.570809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.571122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.571189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.577276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.577550] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.577606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.583666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.583944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.583991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.590089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.590416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.590461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.596752] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.597080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.597144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.603494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.603814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.603882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.610004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.610310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.610343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.616459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.616750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.616783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.623042] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.623372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.623403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.508 [2024-04-18 11:20:10.629379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.508 [2024-04-18 11:20:10.629718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.508 [2024-04-18 11:20:10.629760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.635678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.635953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.635997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.641889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.642187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.642229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.648019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.648308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.648341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.654241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.654546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.654587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.660544] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.660823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.660864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 
11:20:10.666815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.667141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.667183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.673066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.673366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.673407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.679084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.679365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.679398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.685503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.685834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.685865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.691861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.692247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.692286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.698146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.698454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.698501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.704296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.704588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.704621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.710497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.710782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.710816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.716872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.717163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.717224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.509 [2024-04-18 11:20:10.723160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.509 [2024-04-18 11:20:10.723523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-04-18 11:20:10.723570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.729213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.729516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.729554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.735493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.735813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.735843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.741827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.742121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.742153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.748095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.748395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.748439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.754317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.754594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.754626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.760394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.760684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.760725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.766447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.766777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.766809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.772736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.773031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.773066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.779098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.779448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.779490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.785395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.785710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.785754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.791543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.791814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.791847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.797914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.798204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.798237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.804116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.804478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.804536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.810358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.810674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.810719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.816801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.817086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.817150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.822915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.823206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.823255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.829203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.829479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.829512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.835471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.835743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.835776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.841544] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.841819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.841859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.847730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.848002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.848038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.853837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.854135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.854179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.859969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.860254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.860297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.866065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.866365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.866415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.872263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.872587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.872623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.878484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.878816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.878858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.884947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.885237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.885289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.891300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.891581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.891613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.897445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.897730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.897772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.903552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.903868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.903910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.909872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.910163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.910196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.916248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.916530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.916563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.922585] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.922866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.922905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.928849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.929177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.929226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.935580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.935870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.935915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.941706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.941978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.942025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.947776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.948053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.948097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.954005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.954376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.954425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.960586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.960871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.960913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.967025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.967334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.768 [2024-04-18 11:20:10.967387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.768 [2024-04-18 11:20:10.973569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.768 [2024-04-18 11:20:10.973876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.769 [2024-04-18 11:20:10.973924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.769 [2024-04-18 11:20:10.979762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.769 [2024-04-18 11:20:10.980059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.769 [2024-04-18 11:20:10.980098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.769 [2024-04-18 11:20:10.986082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:02.769 [2024-04-18 11:20:10.986398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.769 [2024-04-18 11:20:10.986446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.028 [2024-04-18 11:20:10.992135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.028 [2024-04-18 11:20:10.992452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.028 [2024-04-18 11:20:10.992524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.028 [2024-04-18 11:20:10.998192] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.028 [2024-04-18 11:20:10.998466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.028 [2024-04-18 11:20:10.998507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.028 [2024-04-18 11:20:11.004289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.028 [2024-04-18 11:20:11.004578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.028 [2024-04-18 11:20:11.004624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.028 [2024-04-18 11:20:11.010715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.028 [2024-04-18 11:20:11.010993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.011035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.017054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.017349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.017391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.023093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.023387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.023433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.029155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.029437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.029480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.035254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.035525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.035569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.041378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.041675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.041716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.047602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.047882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.047925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.053823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.054128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.054170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.060142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.060433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.060474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.066389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.066668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.066709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.072553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.072835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.072880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.078598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.078932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.078994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.084923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.085233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.085290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.091127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.091414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.091445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.097251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.097554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.097596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.103544] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.103867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.103958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.110132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.110504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.110553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.116432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.116733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.116775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.122591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.122870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.122913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.128869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.129163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.129204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.135311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.135594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.135644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.141709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.141988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.142031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.147800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.148088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.148152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.153987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.154300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.154369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.160151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.160547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.160610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.166408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.166685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.166727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.172607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.172897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.029 [2024-04-18 11:20:11.172950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.029 [2024-04-18 11:20:11.178883] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.029 [2024-04-18 11:20:11.179186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.179229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.185348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.185626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.185668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.191513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.191791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.191843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.197622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.197906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.197942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.203627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.203903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.203949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.209686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.209973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.210063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.215707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.215995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.216036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.221706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.221989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.222030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.227710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.227992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.228036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.233721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.233998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.234074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.239713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.240009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.240050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.030 [2024-04-18 11:20:11.245813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.030 [2024-04-18 11:20:11.246092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.030 [2024-04-18 11:20:11.246151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.251799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.252077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.252129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.258098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.258417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.258468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.264644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.264924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.264976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.270712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.270990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.271032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.276889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.277209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.277243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.283089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.283425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.283499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.289209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.289519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.289560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.295309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.295619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.295680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.301398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.301686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.301727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.307526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.307804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.307847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.313776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.314096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.314161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.289 [2024-04-18 11:20:11.319969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.289 [2024-04-18 11:20:11.320300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.289 [2024-04-18 11:20:11.320364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.326161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.326487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.326529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.332246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.332538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.332580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.338344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.338650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.338692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.344439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.344738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.344780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.350473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.350782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.350833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.356642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.356919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.356977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.362858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.363149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.363216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.368993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.369294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.369335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.375020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.375330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.375373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.381211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.381569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.381618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.387482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.387809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.387852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.393737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.394016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.394058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.399883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.400193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.400231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.406273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.406593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.406634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.412605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.412894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.412936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.418728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.419011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.419053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.424782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.425070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.425155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.430908] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.431228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.431283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.437055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.437371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.437431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.443388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.443690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.443723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.449782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.450061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.450120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.455861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.456162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.456198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.461908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.462219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.462260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.290 [2024-04-18 11:20:11.468075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.290 [2024-04-18 11:20:11.468386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.290 [2024-04-18 11:20:11.468434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.291 [2024-04-18 11:20:11.474236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.291 [2024-04-18 11:20:11.474515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.291 [2024-04-18 11:20:11.474555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.291 [2024-04-18 11:20:11.480246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.291 [2024-04-18 11:20:11.480545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.291 [2024-04-18 11:20:11.480586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.291 [2024-04-18 11:20:11.486367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.291 [2024-04-18 11:20:11.486657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.291 [2024-04-18 11:20:11.486698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.291 [2024-04-18 11:20:11.492685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.291 [2024-04-18 11:20:11.492961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.291 [2024-04-18 11:20:11.493001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.291 [2024-04-18 11:20:11.498881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.291 [2024-04-18 11:20:11.499179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.291 [2024-04-18 11:20:11.499214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.291 [2024-04-18 11:20:11.505066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.291 [2024-04-18 11:20:11.505372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.291 [2024-04-18 11:20:11.505413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.511050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.511343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.511383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.517201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.517509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.517551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.523208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.523494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.523553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.529290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.529605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.529651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.535266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.535556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.535598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.541439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.541738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.541781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.547492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.547767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.547816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.553540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.553823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.553876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.559558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.559846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.559888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.565707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.565992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.566052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.571789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.572084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.572147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.577825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.578138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.578182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.583900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.549 [2024-04-18 11:20:11.584205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.549 [2024-04-18 11:20:11.584257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.549 [2024-04-18 11:20:11.589940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.590239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.590287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.596073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.596371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.596415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.602204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.602484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.602528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.608140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.608420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.608463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.614299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.614587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.614640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.620418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.620711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.620767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.626498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.626782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.626837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.632636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.632914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.632957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.638857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.639144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.639201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.644900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.645186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.645230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.651010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.651320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.651379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.657100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.657471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.657521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.663263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.663563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.663609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.669407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.669693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.669742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.675512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.675798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.675836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.681657] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.681957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.682005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.687746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.688059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.688123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.693868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.694169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.694213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.699868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.700161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.700210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.705997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.706294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.706338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.712188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.712474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.712534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.718280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.718561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.718622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.724361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.724655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.724688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.730392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.730673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.730717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.736461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.736758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.736801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.742600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.742883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.742931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.550 [2024-04-18 11:20:11.748736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.550 [2024-04-18 11:20:11.749021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.550 [2024-04-18 11:20:11.749075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.551 [2024-04-18 11:20:11.754956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.551 [2024-04-18 11:20:11.755298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.551 [2024-04-18 11:20:11.755361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.551 [2024-04-18 11:20:11.761165] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.551 [2024-04-18 11:20:11.761444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.551 [2024-04-18 11:20:11.761487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.551 [2024-04-18 11:20:11.767203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.551 [2024-04-18 11:20:11.767482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.551 [2024-04-18 11:20:11.767524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.773151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.773460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.773506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.779140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.779430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.779476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.785188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.785476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.785524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.791184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.791481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.791527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.797338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.797640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.797696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.803540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.803867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.803926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.809531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.809856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.809949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.815574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.815891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.815950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.821648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.821913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.821963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.827637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.827909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.827952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.833723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.834014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.834048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.839747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.840022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.840065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.845858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.846142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.846196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.851968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.852259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.852316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.858089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.858389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.858436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.864211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.864522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.864571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.870480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.870794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.870842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.876803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.877098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.877157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.883067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.883372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.883413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.889316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.889606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.889665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.895559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.895854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.895903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.901857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.902166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.902213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.908077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.908390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.908456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.914329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.914624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.914674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.920537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.810 [2024-04-18 11:20:11.920839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.810 [2024-04-18 11:20:11.920889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.810 [2024-04-18 11:20:11.926779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.927101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.927163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.933207] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.933549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.933598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.939609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.939910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.939962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.946160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.946464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.946511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.952474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.952770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.952833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.958665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.958978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.959035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.965062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.965397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.965445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.971318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.971613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.971662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.977655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.977959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.978013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.983995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.984310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.984359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.990364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.990662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.990710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:11.996762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:11.997073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:11.997135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:12.003107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:12.003438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:12.003494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:12.009520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:12.009831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:12.009896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:12.015961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:12.016328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:12.016392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:12.022303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:12.022641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:12.022701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.811 [2024-04-18 11:20:12.028648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:03.811 [2024-04-18 11:20:12.028931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.811 [2024-04-18 11:20:12.028981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.070 [2024-04-18 11:20:12.035030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.070 [2024-04-18 11:20:12.035443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.070 [2024-04-18 11:20:12.035492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.041368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.041660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.041717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.047666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.047956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.048004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.053955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.054263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.054312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.060208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.060572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.060615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.066682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.066969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.067026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.073060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.073361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.073410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.079513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.079811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.079858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.085817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.086119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.086168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.092258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.092551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.092599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.098663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.098987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.099044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.105087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.105401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.105459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.111391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.111681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.111742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.117790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.118086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.118149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.124194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.124487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.124633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.130494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.130788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.130837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.136790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.137083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.137144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.142946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.143276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.143330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.149270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.149575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.149630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.155655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.155939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.156001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.161987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.162318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.162368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.168295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.168595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.168646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.174809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.175129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.175209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.181098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.181408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.181455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.187549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.187845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.187894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.194012] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.194324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.194373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.200564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.200843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.200892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.071 [2024-04-18 11:20:12.206882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.071 [2024-04-18 11:20:12.207220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.071 [2024-04-18 11:20:12.207267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.213378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.213704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.213762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.219906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.220206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.220254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.226206] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.226502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.226550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.232437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.232735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.232784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.238777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.239061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.239121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.245092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.245461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.245517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.251393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.251689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.251737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.257794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.258158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.258218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.264268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.264563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.264612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.270450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.270729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.270778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.276709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.277069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.277131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.283309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.283589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.283647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.072 [2024-04-18 11:20:12.289769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.072 [2024-04-18 11:20:12.290051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.072 [2024-04-18 11:20:12.290100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.296040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.296401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.296457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.302534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.302856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.302913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.308922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.309231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.309280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.315305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.315584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.315642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.321604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.321912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.321972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.327975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.328276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.328318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.334404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.334713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.334762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.340868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.341178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.341227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.347216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.347559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.347607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.353553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.353847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.353896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.359950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.360253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.360302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.366482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.366788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.366837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.372763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.373160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.373230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.331 [2024-04-18 11:20:12.379147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:04.331 [2024-04-18 11:20:12.379435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.331 [2024-04-18 11:20:12.379484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.331 00:29:04.331 Latency(us) 00:29:04.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.331 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:04.331 nvme0n1 : 2.00 4918.13 614.77 0.00 0.00 3244.49 2621.44 13524.25 00:29:04.331 =================================================================================================================== 00:29:04.331 Total : 4918.13 614.77 0.00 0.00 3244.49 2621.44 13524.25 00:29:04.331 0 00:29:04.331 11:20:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:04.331 11:20:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:04.331 11:20:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:04.331 | .driver_specific 00:29:04.331 | .nvme_error 00:29:04.331 | .status_code 00:29:04.331 | .command_transient_transport_error' 00:29:04.331 11:20:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:04.590 11:20:12 -- host/digest.sh@71 -- # (( 317 > 0 )) 00:29:04.590 11:20:12 -- host/digest.sh@73 -- # killprocess 88420 00:29:04.590 11:20:12 -- common/autotest_common.sh@936 -- # '[' -z 88420 ']' 00:29:04.590 11:20:12 -- common/autotest_common.sh@940 -- # kill -0 88420 00:29:04.590 11:20:12 -- common/autotest_common.sh@941 -- # uname 00:29:04.590 11:20:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:04.590 11:20:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88420 00:29:04.590 11:20:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:04.590 11:20:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:04.590 killing process with pid 88420 00:29:04.590 11:20:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88420' 00:29:04.590 11:20:12 -- common/autotest_common.sh@955 -- # kill 88420 00:29:04.590 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.590 00:29:04.590 Latency(us) 00:29:04.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.590 
=================================================================================================================== 00:29:04.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.590 11:20:12 -- common/autotest_common.sh@960 -- # wait 88420 00:29:05.963 11:20:13 -- host/digest.sh@116 -- # killprocess 88074 00:29:05.963 11:20:13 -- common/autotest_common.sh@936 -- # '[' -z 88074 ']' 00:29:05.963 11:20:13 -- common/autotest_common.sh@940 -- # kill -0 88074 00:29:05.963 11:20:13 -- common/autotest_common.sh@941 -- # uname 00:29:05.963 11:20:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:05.963 11:20:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88074 00:29:05.963 11:20:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:05.963 11:20:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:05.963 killing process with pid 88074 00:29:05.963 11:20:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88074' 00:29:05.963 11:20:13 -- common/autotest_common.sh@955 -- # kill 88074 00:29:05.963 11:20:13 -- common/autotest_common.sh@960 -- # wait 88074 00:29:07.338 ************************************ 00:29:07.338 END TEST nvmf_digest_error 00:29:07.338 ************************************ 00:29:07.338 00:29:07.338 real 0m24.200s 00:29:07.338 user 0m46.099s 00:29:07.338 sys 0m5.128s 00:29:07.338 11:20:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:07.338 11:20:15 -- common/autotest_common.sh@10 -- # set +x 00:29:07.338 11:20:15 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:07.338 11:20:15 -- host/digest.sh@150 -- # nvmftestfini 00:29:07.338 11:20:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:07.338 11:20:15 -- nvmf/common.sh@117 -- # sync 00:29:07.338 11:20:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:07.338 11:20:15 -- nvmf/common.sh@120 -- # set +e 00:29:07.338 11:20:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:07.338 11:20:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:07.338 rmmod nvme_tcp 00:29:07.338 rmmod nvme_fabrics 00:29:07.338 rmmod nvme_keyring 00:29:07.338 11:20:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:07.338 11:20:15 -- nvmf/common.sh@124 -- # set -e 00:29:07.338 11:20:15 -- nvmf/common.sh@125 -- # return 0 00:29:07.338 11:20:15 -- nvmf/common.sh@478 -- # '[' -n 88074 ']' 00:29:07.338 11:20:15 -- nvmf/common.sh@479 -- # killprocess 88074 00:29:07.338 11:20:15 -- common/autotest_common.sh@936 -- # '[' -z 88074 ']' 00:29:07.338 11:20:15 -- common/autotest_common.sh@940 -- # kill -0 88074 00:29:07.338 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (88074) - No such process 00:29:07.338 11:20:15 -- common/autotest_common.sh@963 -- # echo 'Process with pid 88074 is not found' 00:29:07.338 Process with pid 88074 is not found 00:29:07.338 11:20:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:07.338 11:20:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:07.338 11:20:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:07.338 11:20:15 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:07.338 11:20:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:07.338 11:20:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.338 11:20:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:07.338 11:20:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.338 11:20:15 -- nvmf/common.sh@279 -- # ip -4 
addr flush nvmf_init_if 00:29:07.338 00:29:07.338 real 0m49.341s 00:29:07.338 user 1m32.094s 00:29:07.338 sys 0m10.373s 00:29:07.338 11:20:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:07.338 11:20:15 -- common/autotest_common.sh@10 -- # set +x 00:29:07.338 ************************************ 00:29:07.338 END TEST nvmf_digest 00:29:07.338 ************************************ 00:29:07.338 11:20:15 -- nvmf/nvmf.sh@108 -- # [[ 1 -eq 1 ]] 00:29:07.338 11:20:15 -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:29:07.339 11:20:15 -- nvmf/nvmf.sh@110 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:29:07.339 11:20:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:07.339 11:20:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:07.339 11:20:15 -- common/autotest_common.sh@10 -- # set +x 00:29:07.339 ************************************ 00:29:07.339 START TEST nvmf_mdns_discovery 00:29:07.339 ************************************ 00:29:07.339 11:20:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:29:07.339 * Looking for test storage... 00:29:07.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:07.339 11:20:15 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:07.339 11:20:15 -- nvmf/common.sh@7 -- # uname -s 00:29:07.339 11:20:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.339 11:20:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.339 11:20:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.765 11:20:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.765 11:20:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.765 11:20:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.765 11:20:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.765 11:20:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.765 11:20:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.765 11:20:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.765 11:20:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:29:07.765 11:20:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:29:07.765 11:20:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.765 11:20:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.765 11:20:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:07.765 11:20:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.765 11:20:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:07.765 11:20:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.765 11:20:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.765 11:20:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.765 11:20:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.765 11:20:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.765 11:20:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.765 11:20:15 -- paths/export.sh@5 -- # export PATH 00:29:07.765 11:20:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.765 11:20:15 -- nvmf/common.sh@47 -- # : 0 00:29:07.765 11:20:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:07.765 11:20:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:07.765 11:20:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.765 11:20:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.765 11:20:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.765 11:20:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:07.765 11:20:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:07.765 11:20:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:07.765 11:20:15 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:29:07.765 11:20:15 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:29:07.765 11:20:15 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:07.765 11:20:15 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:07.765 11:20:15 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:29:07.765 11:20:15 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:07.765 11:20:15 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:29:07.765 11:20:15 
-- host/mdns_discovery.sh@23 -- # nvmftestinit 00:29:07.765 11:20:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:07.765 11:20:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.765 11:20:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:07.765 11:20:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:07.765 11:20:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:07.765 11:20:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.765 11:20:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:07.765 11:20:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.765 11:20:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:07.765 11:20:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:07.765 11:20:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:07.765 11:20:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:07.765 11:20:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:07.765 11:20:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:07.765 11:20:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.765 11:20:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.765 11:20:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:07.765 11:20:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:07.765 11:20:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:07.765 11:20:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:07.765 11:20:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:07.765 11:20:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.765 11:20:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:07.765 11:20:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:07.765 11:20:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:07.765 11:20:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:07.765 11:20:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:07.766 11:20:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:07.766 Cannot find device "nvmf_tgt_br" 00:29:07.766 11:20:15 -- nvmf/common.sh@155 -- # true 00:29:07.766 11:20:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:07.766 Cannot find device "nvmf_tgt_br2" 00:29:07.766 11:20:15 -- nvmf/common.sh@156 -- # true 00:29:07.766 11:20:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:07.766 11:20:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:07.766 Cannot find device "nvmf_tgt_br" 00:29:07.766 11:20:15 -- nvmf/common.sh@158 -- # true 00:29:07.766 11:20:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:07.766 Cannot find device "nvmf_tgt_br2" 00:29:07.766 11:20:15 -- nvmf/common.sh@159 -- # true 00:29:07.766 11:20:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:07.766 11:20:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:07.766 11:20:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:07.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:07.766 11:20:15 -- nvmf/common.sh@162 -- # true 00:29:07.766 11:20:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:07.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:29:07.766 11:20:15 -- nvmf/common.sh@163 -- # true 00:29:07.766 11:20:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:07.766 11:20:15 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:07.766 11:20:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:07.766 11:20:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:07.766 11:20:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:07.766 11:20:15 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:07.766 11:20:15 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:07.766 11:20:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:07.766 11:20:15 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:07.766 11:20:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:07.766 11:20:15 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:07.766 11:20:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:07.766 11:20:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:07.766 11:20:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:07.766 11:20:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:07.766 11:20:15 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:07.766 11:20:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:07.766 11:20:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:07.766 11:20:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:07.766 11:20:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:07.766 11:20:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:07.766 11:20:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:07.766 11:20:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:07.766 11:20:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:07.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:29:07.766 00:29:07.766 --- 10.0.0.2 ping statistics --- 00:29:07.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.766 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:29:07.766 11:20:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:07.766 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:07.766 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:29:07.766 00:29:07.766 --- 10.0.0.3 ping statistics --- 00:29:07.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.766 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:29:07.766 11:20:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:07.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:29:07.766 00:29:07.766 --- 10.0.0.1 ping statistics --- 00:29:07.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.766 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:29:07.766 11:20:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.766 11:20:15 -- nvmf/common.sh@422 -- # return 0 00:29:07.766 11:20:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:07.766 11:20:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.766 11:20:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:07.766 11:20:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:07.766 11:20:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.766 11:20:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:07.766 11:20:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:07.766 11:20:15 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:29:07.766 11:20:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:07.766 11:20:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:07.766 11:20:15 -- common/autotest_common.sh@10 -- # set +x 00:29:07.766 11:20:15 -- nvmf/common.sh@470 -- # nvmfpid=88752 00:29:07.766 11:20:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:29:07.766 11:20:15 -- nvmf/common.sh@471 -- # waitforlisten 88752 00:29:07.766 11:20:15 -- common/autotest_common.sh@817 -- # '[' -z 88752 ']' 00:29:07.766 11:20:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.766 11:20:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:07.766 11:20:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.766 11:20:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:07.766 11:20:15 -- common/autotest_common.sh@10 -- # set +x 00:29:08.023 [2024-04-18 11:20:16.102633] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:08.023 [2024-04-18 11:20:16.102854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.325 [2024-04-18 11:20:16.285174] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.610 [2024-04-18 11:20:16.609191] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.610 [2024-04-18 11:20:16.609284] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.610 [2024-04-18 11:20:16.609311] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.610 [2024-04-18 11:20:16.609345] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.610 [2024-04-18 11:20:16.609366] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
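Note on the topology above: with NET_TYPE=virt, nvmftestinit builds the whole test network out of one network namespace, three veth pairs and a bridge before the target app is started inside the namespace. Condensed into a standalone sketch (interface names, addresses and firewall rules are taken verbatim from the trace above; this is an illustrative summary, not the exact common.sh code path):

# Target side lives in nvmf_tgt_ns_spdk; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target-side listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the root-namespace ends together, open TCP/4420 and allow bridge forwarding.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity checks, as logged: each address should answer a single ping.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1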
00:29:08.610 [2024-04-18 11:20:16.609423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.867 11:20:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:08.867 11:20:17 -- common/autotest_common.sh@850 -- # return 0 00:29:08.867 11:20:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:08.867 11:20:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:08.867 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.125 11:20:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.125 11:20:17 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:29:09.125 11:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.125 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.125 11:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.125 11:20:17 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:29:09.125 11:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.125 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.383 11:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.383 11:20:17 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:09.383 11:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.383 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.383 [2024-04-18 11:20:17.485323] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.383 11:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.383 11:20:17 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:09.383 11:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.383 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.383 [2024-04-18 11:20:17.493540] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:09.383 11:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.383 11:20:17 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:09.383 11:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.383 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.383 null0 00:29:09.383 11:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.383 11:20:17 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:09.383 11:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.383 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.383 null1 00:29:09.383 11:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.383 11:20:17 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:29:09.383 11:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.383 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.383 null2 00:29:09.383 11:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.384 11:20:17 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:29:09.384 11:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.384 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.384 null3 00:29:09.384 11:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.384 11:20:17 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
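For reference, the rpc_cmd calls traced above amount to the following sequence against the target's default RPC socket (/var/tmp/spdk.sock); rpc_cmd is a thin wrapper around scripts/rpc.py, and the explicit script path below is shown only as an illustration:

# Target-side configuration for the mDNS discovery test (values as logged above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_set_config --discovery-filter=address           # DISCOVERY_FILTER=address, set before framework init
$rpc framework_start_init                                  # required because nvmf_tgt ran with --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$rpc bdev_null_create null0 1000 512                       # four null bdevs, 1000 blocks x 512 B each
$rpc bdev_null_create null1 1000 512
$rpc bdev_null_create null2 1000 512
$rpc bdev_null_create null3 1000 512
$rpc bdev_wait_for_examine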
00:29:09.384 11:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.384 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.384 11:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.384 11:20:17 -- host/mdns_discovery.sh@47 -- # hostpid=88808 00:29:09.384 11:20:17 -- host/mdns_discovery.sh@48 -- # waitforlisten 88808 /tmp/host.sock 00:29:09.384 11:20:17 -- common/autotest_common.sh@817 -- # '[' -z 88808 ']' 00:29:09.384 11:20:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:29:09.384 11:20:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:09.384 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:09.384 11:20:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:09.384 11:20:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:09.384 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:29:09.384 11:20:17 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:09.643 [2024-04-18 11:20:17.644917] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:09.643 [2024-04-18 11:20:17.645093] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88808 ] 00:29:09.643 [2024-04-18 11:20:17.815044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.900 [2024-04-18 11:20:18.120690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.465 11:20:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:10.465 11:20:18 -- common/autotest_common.sh@850 -- # return 0 00:29:10.465 11:20:18 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:29:10.465 11:20:18 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:29:10.465 11:20:18 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:29:10.722 11:20:18 -- host/mdns_discovery.sh@57 -- # avahipid=88837 00:29:10.722 11:20:18 -- host/mdns_discovery.sh@58 -- # sleep 1 00:29:10.722 11:20:18 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:29:10.722 11:20:18 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:29:10.722 Process 1004 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:29:10.722 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:29:10.722 Successfully dropped root privileges. 00:29:10.722 avahi-daemon 0.8 starting up. 00:29:10.722 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:29:10.722 Successfully called chroot(). 00:29:10.722 Successfully dropped remaining capabilities. 00:29:10.722 No service file found in /etc/avahi/services. 00:29:11.656 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:29:11.656 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:29:11.656 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:29:11.656 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:29:11.656 Network interface enumeration completed. 
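The avahi-daemon restart above feeds its configuration through a process substitution (/dev/fd/63). Expanded into an ordinary file, the same setup looks roughly like this; /tmp/avahi-test.conf is a hypothetical path used only for illustration:

  # write the config the test passes inline: mDNS only on the two target interfaces, IPv4 only
  printf '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no\n' \
      > /tmp/avahi-test.conf
  avahi-daemon --kill || true                  # stop any already-running system instance
  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-test.conf &
  avahipid=$!
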
00:29:11.656 Registering new address record for fe80::7880:2bff:fe64:65fa on nvmf_tgt_if2.*. 00:29:11.656 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:29:11.656 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:29:11.656 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:29:11.656 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 3004832198. 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:11.656 11:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.656 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:29:11.656 11:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:11.656 11:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.656 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:29:11.656 11:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:11.656 11:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:11.656 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@68 -- # xargs 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@68 -- # sort 00:29:11.656 11:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@64 -- # sort 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:11.656 11:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.656 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:29:11.656 11:20:19 -- host/mdns_discovery.sh@64 -- # xargs 00:29:11.656 11:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:11.915 11:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.915 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:29:11.915 11:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:11.915 11:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@68 -- # sort 00:29:11.915 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@68 -- # xargs 00:29:11.915 11:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:29:11.915 
11:20:19 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@64 -- # sort 00:29:11.915 11:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.915 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:29:11.915 11:20:19 -- host/mdns_discovery.sh@64 -- # xargs 00:29:11.915 11:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:11.915 11:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.915 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:11.915 11:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:11.915 11:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.915 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@68 -- # xargs 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@68 -- # sort 00:29:11.915 11:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.915 [2024-04-18 11:20:20.089049] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@64 -- # sort 00:29:11.915 11:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:11.915 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:11.915 11:20:20 -- host/mdns_discovery.sh@64 -- # xargs 00:29:11.915 11:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.173 11:20:20 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:29:12.173 11:20:20 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:12.173 11:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.173 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:12.173 [2024-04-18 11:20:20.186670] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.174 11:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.174 11:20:20 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:12.174 11:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.174 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 11:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.174 11:20:20 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:29:12.174 11:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.174 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 11:20:20 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.174 11:20:20 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:29:12.174 11:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.174 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 11:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.174 11:20:20 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:29:12.174 11:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.174 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 11:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.174 11:20:20 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:29:12.174 11:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.174 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 [2024-04-18 11:20:20.226529] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:29:12.174 11:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.174 11:20:20 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:29:12.174 11:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.174 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 [2024-04-18 11:20:20.238518] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:12.174 11:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.174 11:20:20 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=88888 00:29:12.174 11:20:20 -- host/mdns_discovery.sh@125 -- # sleep 5 00:29:12.174 11:20:20 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:29:13.108 [2024-04-18 11:20:20.989045] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:29:13.108 Established under name 'CDC' 00:29:13.367 [2024-04-18 11:20:21.389101] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:29:13.367 [2024-04-18 11:20:21.389199] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:29:13.367 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:29:13.367 cookie is 0 00:29:13.367 is_local: 1 00:29:13.367 our_own: 0 00:29:13.367 wide_area: 0 00:29:13.367 multicast: 1 00:29:13.367 cached: 1 00:29:13.367 [2024-04-18 11:20:21.489084] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:29:13.367 [2024-04-18 11:20:21.489163] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:29:13.367 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:29:13.367 cookie is 0 00:29:13.367 is_local: 1 00:29:13.367 our_own: 0 00:29:13.367 wide_area: 0 00:29:13.367 multicast: 1 00:29:13.367 cached: 1 00:29:14.301 [2024-04-18 11:20:22.404364] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:14.301 [2024-04-18 11:20:22.404438] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery 
ctrlr connected 00:29:14.301 [2024-04-18 11:20:22.404474] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:14.301 [2024-04-18 11:20:22.502392] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:29:14.301 [2024-04-18 11:20:22.503306] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:14.301 [2024-04-18 11:20:22.503340] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:14.301 [2024-04-18 11:20:22.503386] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:14.560 [2024-04-18 11:20:22.564804] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:29:14.560 [2024-04-18 11:20:22.564873] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:29:14.560 [2024-04-18 11:20:22.589352] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:29:14.560 [2024-04-18 11:20:22.654359] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:29:14.560 [2024-04-18 11:20:22.654414] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:29:17.089 11:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.089 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@80 -- # xargs 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@80 -- # sort 00:29:17.089 11:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:17.089 11:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.089 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@76 -- # xargs 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:29:17.089 11:20:25 -- host/mdns_discovery.sh@76 -- # sort 00:29:17.347 11:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:17.347 11:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@68 -- # sort 00:29:17.347 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@68 -- # 
xargs 00:29:17.347 11:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:17.347 11:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@64 -- # sort 00:29:17.347 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@64 -- # xargs 00:29:17.347 11:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:17.347 11:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.347 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@72 -- # xargs 00:29:17.347 11:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:29:17.347 11:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.347 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:17.347 11:20:25 -- host/mdns_discovery.sh@72 -- # xargs 00:29:17.348 11:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.607 11:20:25 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:29:17.607 11:20:25 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:29:17.607 11:20:25 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:17.607 11:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.607 11:20:25 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:29:17.607 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:17.607 11:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.607 11:20:25 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:29:17.607 11:20:25 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:29:17.607 11:20:25 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:29:17.607 11:20:25 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:17.607 11:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.607 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:17.607 11:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.607 11:20:25 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:29:17.607 11:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.607 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:17.607 11:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.607 11:20:25 -- host/mdns_discovery.sh@139 -- # sleep 1 00:29:18.542 11:20:26 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:29:18.542 11:20:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:18.542 11:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.542 11:20:26 -- common/autotest_common.sh@10 -- # set +x 00:29:18.542 11:20:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:18.542 11:20:26 -- host/mdns_discovery.sh@64 -- # sort 00:29:18.542 11:20:26 -- host/mdns_discovery.sh@64 -- # xargs 00:29:18.542 11:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.542 11:20:26 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:18.542 11:20:26 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:29:18.542 11:20:26 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:18.542 11:20:26 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:29:18.542 11:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.542 11:20:26 -- common/autotest_common.sh@10 -- # set +x 00:29:18.542 11:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.800 11:20:26 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:29:18.800 11:20:26 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:29:18.800 11:20:26 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:29:18.800 11:20:26 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:18.800 11:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.800 11:20:26 -- common/autotest_common.sh@10 -- # set +x 00:29:18.800 [2024-04-18 11:20:26.793691] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:18.800 [2024-04-18 11:20:26.794507] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:18.800 [2024-04-18 11:20:26.794584] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:18.800 [2024-04-18 11:20:26.794665] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:18.800 [2024-04-18 11:20:26.794697] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:18.800 11:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.800 11:20:26 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:29:18.800 11:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.800 11:20:26 -- common/autotest_common.sh@10 -- # set +x 00:29:18.800 [2024-04-18 11:20:26.801383] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:29:18.800 [2024-04-18 11:20:26.802478] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:18.800 [2024-04-18 11:20:26.802641] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:18.800 11:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.800 11:20:26 -- host/mdns_discovery.sh@149 -- # sleep 1 00:29:18.800 [2024-04-18 11:20:26.932799] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:29:18.800 [2024-04-18 11:20:26.933765] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:29:18.801 [2024-04-18 11:20:26.998345] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:29:18.801 [2024-04-18 11:20:26.998385] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:18.801 [2024-04-18 11:20:26.998413] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:18.801 [2024-04-18 11:20:26.998450] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:18.801 [2024-04-18 11:20:26.998557] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:29:18.801 [2024-04-18 11:20:26.998582] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:29:18.801 [2024-04-18 11:20:26.998593] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:18.801 [2024-04-18 11:20:26.998624] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:19.059 [2024-04-18 11:20:27.044034] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:19.059 [2024-04-18 11:20:27.044068] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:19.059 [2024-04-18 11:20:27.045014] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:29:19.059 [2024-04-18 11:20:27.045044] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:19.625 11:20:27 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:29:19.625 11:20:27 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:19.625 11:20:27 -- host/mdns_discovery.sh@68 -- # sort 00:29:19.625 11:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.625 11:20:27 -- common/autotest_common.sh@10 -- # set +x 00:29:19.625 11:20:27 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:19.625 11:20:27 -- host/mdns_discovery.sh@68 -- # xargs 00:29:19.625 11:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@64 -- # sort 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@64 -- # xargs 00:29:19.883 11:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.883 11:20:27 -- common/autotest_common.sh@10 -- # set +x 00:29:19.883 11:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:29:19.883 11:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:19.883 11:20:27 -- host/mdns_discovery.sh@72 -- # xargs 00:29:19.883 11:20:27 -- common/autotest_common.sh@10 -- # set +x 00:29:19.883 11:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.883 11:20:28 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:19.883 11:20:28 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:29:19.883 11:20:28 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:29:19.883 11:20:28 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:19.883 11:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.883 11:20:28 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:19.883 11:20:28 -- common/autotest_common.sh@10 -- # set +x 00:29:19.883 11:20:28 -- host/mdns_discovery.sh@72 -- # xargs 00:29:19.883 11:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.883 11:20:28 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:19.883 11:20:28 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:29:19.883 11:20:28 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:19.883 11:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.883 11:20:28 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:29:19.883 11:20:28 -- common/autotest_common.sh@10 -- # set +x 00:29:19.883 11:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.145 11:20:28 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:29:20.145 11:20:28 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:29:20.145 11:20:28 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:29:20.145 11:20:28 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:20.145 11:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.145 11:20:28 -- common/autotest_common.sh@10 -- # set +x 00:29:20.145 [2024-04-18 11:20:28.125936] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:20.145 [2024-04-18 11:20:28.126027] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:20.145 [2024-04-18 11:20:28.126100] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:20.145 [2024-04-18 11:20:28.126154] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:20.145 11:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.145 11:20:28 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:29:20.145 11:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.145 11:20:28 -- common/autotest_common.sh@10 -- # set +x 00:29:20.145 [2024-04-18 11:20:28.133660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.145 [2024-04-18 11:20:28.133733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.145 [2024-04-18 11:20:28.133761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.145 [2024-04-18 11:20:28.133778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.145 [2024-04-18 11:20:28.133796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.145 [2024-04-18 11:20:28.133812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.145 [2024-04-18 
11:20:28.133829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.145 [2024-04-18 11:20:28.133846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.145 [2024-04-18 11:20:28.133862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.145 [2024-04-18 11:20:28.134121] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:20.145 [2024-04-18 11:20:28.134286] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:20.145 11:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.145 11:20:28 -- host/mdns_discovery.sh@162 -- # sleep 1 00:29:20.145 [2024-04-18 11:20:28.140140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.145 [2024-04-18 11:20:28.140184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.145 [2024-04-18 11:20:28.140207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.145 [2024-04-18 11:20:28.140225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.145 [2024-04-18 11:20:28.140242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.145 [2024-04-18 11:20:28.140258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.145 [2024-04-18 11:20:28.140276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.145 [2024-04-18 11:20:28.140291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.145 [2024-04-18 11:20:28.140307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.145 [2024-04-18 11:20:28.143587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.145 [2024-04-18 11:20:28.150062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.145 [2024-04-18 11:20:28.153643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.145 [2024-04-18 11:20:28.153898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.145 [2024-04-18 11:20:28.153989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.145 [2024-04-18 11:20:28.154033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.145 [2024-04-18 11:20:28.154057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.145 [2024-04-18 11:20:28.154127] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.145 [2024-04-18 11:20:28.154188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.145 [2024-04-18 11:20:28.154211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.145 [2024-04-18 11:20:28.154231] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.145 [2024-04-18 11:20:28.154271] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.145 [2024-04-18 11:20:28.160082] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.145 [2024-04-18 11:20:28.160298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.145 [2024-04-18 11:20:28.160398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.145 [2024-04-18 11:20:28.160442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.145 [2024-04-18 11:20:28.160466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.145 [2024-04-18 11:20:28.160547] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.145 [2024-04-18 11:20:28.160600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.145 [2024-04-18 11:20:28.160621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.145 [2024-04-18 11:20:28.160639] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.146 [2024-04-18 11:20:28.160684] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.146 [2024-04-18 11:20:28.163801] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.146 [2024-04-18 11:20:28.163939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.164011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.164041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.146 [2024-04-18 11:20:28.164062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.146 [2024-04-18 11:20:28.164094] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.146 [2024-04-18 11:20:28.164142] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.146 [2024-04-18 11:20:28.164162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.146 [2024-04-18 11:20:28.164179] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:29:20.146 [2024-04-18 11:20:28.164208] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.146 [2024-04-18 11:20:28.170248] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.146 [2024-04-18 11:20:28.170438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.170513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.170552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.146 [2024-04-18 11:20:28.170574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.146 [2024-04-18 11:20:28.170636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.146 [2024-04-18 11:20:28.170667] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.146 [2024-04-18 11:20:28.170686] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.146 [2024-04-18 11:20:28.170703] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.146 [2024-04-18 11:20:28.170731] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.146 [2024-04-18 11:20:28.173909] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.146 [2024-04-18 11:20:28.174058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.174143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.174174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.146 [2024-04-18 11:20:28.174194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.146 [2024-04-18 11:20:28.174225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.146 [2024-04-18 11:20:28.174259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.146 [2024-04-18 11:20:28.174277] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.146 [2024-04-18 11:20:28.174294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.146 [2024-04-18 11:20:28.174322] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.146 [2024-04-18 11:20:28.180402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.146 [2024-04-18 11:20:28.180659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.180734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.180777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.146 [2024-04-18 11:20:28.180800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.146 [2024-04-18 11:20:28.180850] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.146 [2024-04-18 11:20:28.180880] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.146 [2024-04-18 11:20:28.180898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.146 [2024-04-18 11:20:28.180915] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.146 [2024-04-18 11:20:28.180944] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.146 [2024-04-18 11:20:28.184018] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.146 [2024-04-18 11:20:28.184224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.184300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.184329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.146 [2024-04-18 11:20:28.184349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.146 [2024-04-18 11:20:28.184381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.146 [2024-04-18 11:20:28.184408] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.146 [2024-04-18 11:20:28.184425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.146 [2024-04-18 11:20:28.184463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.146 [2024-04-18 11:20:28.184539] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.146 [2024-04-18 11:20:28.190521] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.146 [2024-04-18 11:20:28.190665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.190737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.190767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.146 [2024-04-18 11:20:28.190787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.146 [2024-04-18 11:20:28.190820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.146 [2024-04-18 11:20:28.190847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.146 [2024-04-18 11:20:28.190864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.146 [2024-04-18 11:20:28.190880] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.146 [2024-04-18 11:20:28.190907] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.146 [2024-04-18 11:20:28.194179] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.146 [2024-04-18 11:20:28.194307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.194377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.194406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.146 [2024-04-18 11:20:28.194426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.146 [2024-04-18 11:20:28.194456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.146 [2024-04-18 11:20:28.194510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.146 [2024-04-18 11:20:28.194531] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.146 [2024-04-18 11:20:28.194548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.146 [2024-04-18 11:20:28.194575] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.146 [2024-04-18 11:20:28.200630] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.146 [2024-04-18 11:20:28.200785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.200863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.200905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.146 [2024-04-18 11:20:28.200926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.146 [2024-04-18 11:20:28.200956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.146 [2024-04-18 11:20:28.200983] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.146 [2024-04-18 11:20:28.200999] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.146 [2024-04-18 11:20:28.201016] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.146 [2024-04-18 11:20:28.201043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.146 [2024-04-18 11:20:28.204271] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.146 [2024-04-18 11:20:28.204394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.204463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.146 [2024-04-18 11:20:28.204500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.146 [2024-04-18 11:20:28.204531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.146 [2024-04-18 11:20:28.204562] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.146 [2024-04-18 11:20:28.204615] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.146 [2024-04-18 11:20:28.204636] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.146 [2024-04-18 11:20:28.204653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.146 [2024-04-18 11:20:28.204696] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.147 [2024-04-18 11:20:28.210739] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.147 [2024-04-18 11:20:28.210888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.210961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.210992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.147 [2024-04-18 11:20:28.211011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.147 [2024-04-18 11:20:28.211040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.147 [2024-04-18 11:20:28.211066] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.147 [2024-04-18 11:20:28.211083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.147 [2024-04-18 11:20:28.211099] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.147 [2024-04-18 11:20:28.211158] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.147 [2024-04-18 11:20:28.214376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.147 [2024-04-18 11:20:28.214505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.214575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.214605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.147 [2024-04-18 11:20:28.214623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.147 [2024-04-18 11:20:28.214662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.147 [2024-04-18 11:20:28.214716] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.147 [2024-04-18 11:20:28.214737] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.147 [2024-04-18 11:20:28.214753] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.147 [2024-04-18 11:20:28.214780] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.147 [2024-04-18 11:20:28.220894] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.147 [2024-04-18 11:20:28.221030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.221115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.221147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.147 [2024-04-18 11:20:28.221167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.147 [2024-04-18 11:20:28.221198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.147 [2024-04-18 11:20:28.221225] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.147 [2024-04-18 11:20:28.221241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.147 [2024-04-18 11:20:28.221257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.147 [2024-04-18 11:20:28.221286] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.147 [2024-04-18 11:20:28.224473] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.147 [2024-04-18 11:20:28.224653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.224729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.224758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.147 [2024-04-18 11:20:28.224778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.147 [2024-04-18 11:20:28.224808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.147 [2024-04-18 11:20:28.224921] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.147 [2024-04-18 11:20:28.224946] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.147 [2024-04-18 11:20:28.224963] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.147 [2024-04-18 11:20:28.224991] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.147 [2024-04-18 11:20:28.230994] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.147 [2024-04-18 11:20:28.231163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.231237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.231269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.147 [2024-04-18 11:20:28.231290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.147 [2024-04-18 11:20:28.231323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.147 [2024-04-18 11:20:28.231350] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.147 [2024-04-18 11:20:28.231366] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.147 [2024-04-18 11:20:28.231383] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.147 [2024-04-18 11:20:28.231411] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.147 [2024-04-18 11:20:28.234604] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.147 [2024-04-18 11:20:28.234747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.234817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.234846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.147 [2024-04-18 11:20:28.234865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.147 [2024-04-18 11:20:28.234904] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.147 [2024-04-18 11:20:28.234960] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.147 [2024-04-18 11:20:28.234981] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.147 [2024-04-18 11:20:28.234997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.147 [2024-04-18 11:20:28.235025] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.147 [2024-04-18 11:20:28.241114] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.147 [2024-04-18 11:20:28.241293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.241368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.241399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.147 [2024-04-18 11:20:28.241418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.147 [2024-04-18 11:20:28.241450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.147 [2024-04-18 11:20:28.241476] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.147 [2024-04-18 11:20:28.241493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.147 [2024-04-18 11:20:28.241509] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.147 [2024-04-18 11:20:28.241536] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.147 [2024-04-18 11:20:28.244699] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.147 [2024-04-18 11:20:28.244829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.244897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.244926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.147 [2024-04-18 11:20:28.244945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.147 [2024-04-18 11:20:28.244974] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.147 [2024-04-18 11:20:28.245058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.147 [2024-04-18 11:20:28.245080] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.147 [2024-04-18 11:20:28.245097] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.147 [2024-04-18 11:20:28.245144] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.147 [2024-04-18 11:20:28.251247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.147 [2024-04-18 11:20:28.251412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.251489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.251520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.147 [2024-04-18 11:20:28.251540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.147 [2024-04-18 11:20:28.251572] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.147 [2024-04-18 11:20:28.251620] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.147 [2024-04-18 11:20:28.251638] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.147 [2024-04-18 11:20:28.251655] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.147 [2024-04-18 11:20:28.251691] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.147 [2024-04-18 11:20:28.254792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.147 [2024-04-18 11:20:28.254934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.255017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.147 [2024-04-18 11:20:28.255063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.148 [2024-04-18 11:20:28.255082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.148 [2024-04-18 11:20:28.255111] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.148 [2024-04-18 11:20:28.255186] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.148 [2024-04-18 11:20:28.255210] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.148 [2024-04-18 11:20:28.255226] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.148 [2024-04-18 11:20:28.255254] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.148 [2024-04-18 11:20:28.261342] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:20.148 [2024-04-18 11:20:28.261497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.148 [2024-04-18 11:20:28.261579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.148 [2024-04-18 11:20:28.261642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006040 with addr=10.0.0.3, port=4420 00:29:20.148 [2024-04-18 11:20:28.261662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006040 is same with the state(5) to be set 00:29:20.148 [2024-04-18 11:20:28.261692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006040 (9): Bad file descriptor 00:29:20.148 [2024-04-18 11:20:28.261719] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:20.148 [2024-04-18 11:20:28.261736] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:20.148 [2024-04-18 11:20:28.261752] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:20.148 [2024-04-18 11:20:28.261778] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.148 [2024-04-18 11:20:28.264910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:20.148 [2024-04-18 11:20:28.265085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.148 [2024-04-18 11:20:28.265203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.148 [2024-04-18 11:20:28.265250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:29:20.148 [2024-04-18 11:20:28.265270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:29:20.148 [2024-04-18 11:20:28.265322] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:29:20.148 [2024-04-18 11:20:28.265473] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:20.148 [2024-04-18 11:20:28.265527] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:20.148 [2024-04-18 11:20:28.265666] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:20.148 [2024-04-18 11:20:28.265774] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:29:20.148 [2024-04-18 11:20:28.265843] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:20.148 [2024-04-18 11:20:28.265876] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:20.148 [2024-04-18 11:20:28.265938] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.148 [2024-04-18 11:20:28.265971] 
nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:20.148 [2024-04-18 11:20:28.266008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.148 [2024-04-18 11:20:28.266053] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.148 [2024-04-18 11:20:28.351576] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:20.148 [2024-04-18 11:20:28.351760] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:21.082 11:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.082 11:20:29 -- common/autotest_common.sh@10 -- # set +x 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@68 -- # sort 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@68 -- # xargs 00:29:21.082 11:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@64 -- # sort 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@64 -- # xargs 00:29:21.082 11:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.082 11:20:29 -- common/autotest_common.sh@10 -- # set +x 00:29:21.082 11:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:29:21.082 11:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.082 11:20:29 -- common/autotest_common.sh@10 -- # set +x 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:21.082 11:20:29 -- host/mdns_discovery.sh@72 -- # xargs 00:29:21.082 11:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:29:21.340 11:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.340 11:20:29 -- common/autotest_common.sh@10 -- # set +x 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:21.340 11:20:29 -- 
host/mdns_discovery.sh@72 -- # xargs 00:29:21.340 11:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:29:21.340 11:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.340 11:20:29 -- common/autotest_common.sh@10 -- # set +x 00:29:21.340 11:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:29:21.340 11:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.340 11:20:29 -- common/autotest_common.sh@10 -- # set +x 00:29:21.340 11:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.340 11:20:29 -- host/mdns_discovery.sh@172 -- # sleep 1 00:29:21.340 [2024-04-18 11:20:29.489355] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:29:22.274 11:20:30 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:29:22.274 11:20:30 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:22.274 11:20:30 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:29:22.274 11:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.274 11:20:30 -- host/mdns_discovery.sh@80 -- # sort 00:29:22.274 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:29:22.274 11:20:30 -- host/mdns_discovery.sh@80 -- # xargs 00:29:22.274 11:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@68 -- # sort 00:29:22.532 11:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.532 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@68 -- # xargs 00:29:22.532 11:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:22.532 11:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.532 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@64 -- # xargs 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@64 -- # sort 00:29:22.532 11:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:29:22.532 11:20:30 -- 
host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:22.532 11:20:30 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:29:22.533 11:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.533 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:29:22.533 11:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.533 11:20:30 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:29:22.533 11:20:30 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:29:22.533 11:20:30 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:29:22.533 11:20:30 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:22.533 11:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.533 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:29:22.533 11:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.533 11:20:30 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:22.533 11:20:30 -- common/autotest_common.sh@638 -- # local es=0 00:29:22.533 11:20:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:22.533 11:20:30 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:22.533 11:20:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:22.533 11:20:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:22.533 11:20:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:22.533 11:20:30 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:22.533 11:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.533 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:29:22.533 [2024-04-18 11:20:30.674674] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:29:22.533 2024/04/18 11:20:30 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:29:22.533 request: 00:29:22.533 { 00:29:22.533 "method": "bdev_nvme_start_mdns_discovery", 00:29:22.533 "params": { 00:29:22.533 "name": "mdns", 00:29:22.533 "svcname": "_nvme-disc._http", 00:29:22.533 "hostnqn": "nqn.2021-12.io.spdk:test" 00:29:22.533 } 00:29:22.533 } 00:29:22.533 Got JSON-RPC error response 00:29:22.533 GoRPCClient: error on JSON-RPC call 00:29:22.533 11:20:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:22.533 11:20:30 -- common/autotest_common.sh@641 -- # es=1 00:29:22.533 11:20:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:22.533 11:20:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:22.533 11:20:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:22.533 11:20:30 -- host/mdns_discovery.sh@183 -- # sleep 5 00:29:23.108 [2024-04-18 11:20:31.063470] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:29:23.108 [2024-04-18 11:20:31.163468] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:29:23.108 [2024-04-18 11:20:31.263497] 
bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:29:23.108 [2024-04-18 11:20:31.263571] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:29:23.108 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:29:23.108 cookie is 0 00:29:23.108 is_local: 1 00:29:23.108 our_own: 0 00:29:23.108 wide_area: 0 00:29:23.108 multicast: 1 00:29:23.108 cached: 1 00:29:23.367 [2024-04-18 11:20:31.363465] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:29:23.367 [2024-04-18 11:20:31.363519] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:29:23.367 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:29:23.367 cookie is 0 00:29:23.367 is_local: 1 00:29:23.367 our_own: 0 00:29:23.367 wide_area: 0 00:29:23.367 multicast: 1 00:29:23.367 cached: 1 00:29:24.301 [2024-04-18 11:20:32.276562] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:24.301 [2024-04-18 11:20:32.276631] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:29:24.301 [2024-04-18 11:20:32.276669] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:24.301 [2024-04-18 11:20:32.362818] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:29:24.301 [2024-04-18 11:20:32.377242] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:24.301 [2024-04-18 11:20:32.377284] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:24.301 [2024-04-18 11:20:32.377352] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:24.301 [2024-04-18 11:20:32.435628] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:29:24.301 [2024-04-18 11:20:32.435678] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:24.301 [2024-04-18 11:20:32.464121] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:29:24.559 [2024-04-18 11:20:32.532584] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:29:24.559 [2024-04-18 11:20:32.532630] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:27.852 11:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.852 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@80 -- # xargs 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@80 -- # sort 00:29:27.852 11:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@185 -- # [[ mdns == 
\m\d\n\s ]] 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:27.852 11:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.852 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@76 -- # sort 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@76 -- # xargs 00:29:27.852 11:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@64 -- # sort 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@64 -- # xargs 00:29:27.852 11:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.852 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:29:27.852 11:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:27.852 11:20:35 -- common/autotest_common.sh@638 -- # local es=0 00:29:27.852 11:20:35 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:27.852 11:20:35 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:27.852 11:20:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:27.852 11:20:35 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:27.852 11:20:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:27.852 11:20:35 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:27.852 11:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.852 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:29:27.852 [2024-04-18 11:20:35.868723] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:29:27.852 request: 00:29:27.852 { 00:29:27.852 "method": "bdev_nvme_start_mdns_discovery", 00:29:27.852 "params": { 00:29:27.852 "name": "cdc", 00:29:27.852 "svcname": "_nvme-disc._tcp", 00:29:27.852 "hostnqn": "nqn.2021-12.io.spdk:test" 00:29:27.852 } 00:29:27.852 } 00:29:27.852 Got JSON-RPC error response 00:29:27.852 GoRPCClient: error on JSON-RPC call 00:29:27.852 2024/04/18 11:20:35 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:29:27.852 11:20:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:27.852 11:20:35 -- common/autotest_common.sh@641 -- # 
es=1 00:29:27.852 11:20:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:27.852 11:20:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:27.852 11:20:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:29:27.852 11:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.852 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@76 -- # sort 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@76 -- # xargs 00:29:27.852 11:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@64 -- # sort 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@64 -- # xargs 00:29:27.852 11:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.852 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:29:27.852 11:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:27.852 11:20:35 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:29:27.852 11:20:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:27.852 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:29:27.852 11:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:27.852 11:20:36 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:29:27.852 11:20:36 -- host/mdns_discovery.sh@197 -- # kill 88808 00:29:27.852 11:20:36 -- host/mdns_discovery.sh@200 -- # wait 88808 00:29:28.110 [2024-04-18 11:20:36.259186] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:29:29.046 11:20:37 -- host/mdns_discovery.sh@201 -- # kill 88888 00:29:29.046 Got SIGTERM, quitting. 00:29:29.046 11:20:37 -- host/mdns_discovery.sh@202 -- # kill 88837 00:29:29.046 11:20:37 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:29:29.046 11:20:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:29.046 Got SIGTERM, quitting. 00:29:29.046 11:20:37 -- nvmf/common.sh@117 -- # sync 00:29:29.046 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:29:29.046 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:29:29.046 avahi-daemon 0.8 exiting. 
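The two JSON-RPC failures above (Code=-17, File exists) are the expected negative-path results: once an mDNS browse named "mdns" is running for _nvme-disc._tcp, neither a second start under a different name nor a new browse for the same service may be registered. A hedged sketch of that idempotency check, using only the RPCs and arguments visible in this log:
  # start the browse once, then verify a duplicate registration is rejected
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
  $rpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  if $rpc bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test; then
      echo "duplicate mDNS discovery was accepted, expected Code=-17 File exists" >&2
      exit 1
  fi
  $rpc bdev_nvme_stop_mdns_discovery -b mdns   # stop the browse when done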
00:29:29.046 11:20:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:29.046 11:20:37 -- nvmf/common.sh@120 -- # set +e 00:29:29.046 11:20:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:29.046 11:20:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:29.046 rmmod nvme_tcp 00:29:29.046 rmmod nvme_fabrics 00:29:29.046 rmmod nvme_keyring 00:29:29.046 11:20:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:29.046 11:20:37 -- nvmf/common.sh@124 -- # set -e 00:29:29.046 11:20:37 -- nvmf/common.sh@125 -- # return 0 00:29:29.046 11:20:37 -- nvmf/common.sh@478 -- # '[' -n 88752 ']' 00:29:29.046 11:20:37 -- nvmf/common.sh@479 -- # killprocess 88752 00:29:29.046 11:20:37 -- common/autotest_common.sh@936 -- # '[' -z 88752 ']' 00:29:29.046 11:20:37 -- common/autotest_common.sh@940 -- # kill -0 88752 00:29:29.046 11:20:37 -- common/autotest_common.sh@941 -- # uname 00:29:29.046 11:20:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:29.046 11:20:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88752 00:29:29.046 killing process with pid 88752 00:29:29.046 11:20:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:29.046 11:20:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:29.046 11:20:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88752' 00:29:29.046 11:20:37 -- common/autotest_common.sh@955 -- # kill 88752 00:29:29.046 11:20:37 -- common/autotest_common.sh@960 -- # wait 88752 00:29:30.422 11:20:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:30.422 11:20:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:30.422 11:20:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:30.422 11:20:38 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.422 11:20:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.422 11:20:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.422 11:20:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.422 11:20:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.422 11:20:38 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:30.422 ************************************ 00:29:30.422 END TEST nvmf_mdns_discovery 00:29:30.422 ************************************ 00:29:30.422 00:29:30.422 real 0m22.987s 00:29:30.422 user 0m43.508s 00:29:30.422 sys 0m2.406s 00:29:30.422 11:20:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:30.422 11:20:38 -- common/autotest_common.sh@10 -- # set +x 00:29:30.422 11:20:38 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:29:30.422 11:20:38 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:30.422 11:20:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:30.422 11:20:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:30.422 11:20:38 -- common/autotest_common.sh@10 -- # set +x 00:29:30.422 ************************************ 00:29:30.422 START TEST nvmf_multipath 00:29:30.422 ************************************ 00:29:30.422 11:20:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:30.422 * Looking for test storage... 
00:29:30.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:30.422 11:20:38 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:30.422 11:20:38 -- nvmf/common.sh@7 -- # uname -s 00:29:30.422 11:20:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.422 11:20:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.422 11:20:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.422 11:20:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.422 11:20:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.422 11:20:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.422 11:20:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.422 11:20:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.422 11:20:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.681 11:20:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.681 11:20:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:29:30.681 11:20:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:29:30.681 11:20:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.681 11:20:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.681 11:20:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:30.681 11:20:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.681 11:20:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:30.681 11:20:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.681 11:20:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.681 11:20:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.681 11:20:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.681 11:20:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.681 11:20:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.681 11:20:38 -- paths/export.sh@5 -- # export PATH 00:29:30.681 11:20:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.681 11:20:38 -- nvmf/common.sh@47 -- # : 0 00:29:30.681 11:20:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:30.681 11:20:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:30.681 11:20:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.681 11:20:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.681 11:20:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.681 11:20:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:30.681 11:20:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:30.681 11:20:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:30.682 11:20:38 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:30.682 11:20:38 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:30.682 11:20:38 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:30.682 11:20:38 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:30.682 11:20:38 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:30.682 11:20:38 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:30.682 11:20:38 -- host/multipath.sh@30 -- # nvmftestinit 00:29:30.682 11:20:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:30.682 11:20:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.682 11:20:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:30.682 11:20:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:30.682 11:20:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:30.682 11:20:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.682 11:20:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.682 11:20:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.682 11:20:38 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:30.682 11:20:38 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:30.682 11:20:38 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:30.682 11:20:38 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:30.682 11:20:38 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:30.682 11:20:38 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:30.682 11:20:38 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.682 11:20:38 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.682 11:20:38 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:30.682 11:20:38 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:30.682 11:20:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:30.682 11:20:38 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:30.682 11:20:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:30.682 11:20:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.682 11:20:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:30.682 11:20:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:30.682 11:20:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:30.682 11:20:38 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:30.682 11:20:38 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:30.682 11:20:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:30.682 Cannot find device "nvmf_tgt_br" 00:29:30.682 11:20:38 -- nvmf/common.sh@155 -- # true 00:29:30.682 11:20:38 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:30.682 Cannot find device "nvmf_tgt_br2" 00:29:30.682 11:20:38 -- nvmf/common.sh@156 -- # true 00:29:30.682 11:20:38 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:30.682 11:20:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:30.682 Cannot find device "nvmf_tgt_br" 00:29:30.682 11:20:38 -- nvmf/common.sh@158 -- # true 00:29:30.682 11:20:38 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:30.682 Cannot find device "nvmf_tgt_br2" 00:29:30.682 11:20:38 -- nvmf/common.sh@159 -- # true 00:29:30.682 11:20:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:30.682 11:20:38 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:30.682 11:20:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:30.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:30.682 11:20:38 -- nvmf/common.sh@162 -- # true 00:29:30.682 11:20:38 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:30.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:30.682 11:20:38 -- nvmf/common.sh@163 -- # true 00:29:30.682 11:20:38 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:30.682 11:20:38 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:30.682 11:20:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:30.682 11:20:38 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:30.682 11:20:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:30.682 11:20:38 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:30.682 11:20:38 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:30.682 11:20:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:30.682 11:20:38 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:30.682 11:20:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:30.682 11:20:38 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:30.682 11:20:38 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:29:30.682 11:20:38 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:30.682 11:20:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:30.682 11:20:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:30.941 11:20:38 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:30.941 11:20:38 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:30.941 11:20:38 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:30.941 11:20:38 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:30.941 11:20:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:30.941 11:20:38 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:30.941 11:20:38 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:30.941 11:20:38 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:30.941 11:20:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:30.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:29:30.941 00:29:30.941 --- 10.0.0.2 ping statistics --- 00:29:30.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.941 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:29:30.941 11:20:38 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:30.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:30.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:29:30.941 00:29:30.941 --- 10.0.0.3 ping statistics --- 00:29:30.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.941 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:29:30.941 11:20:38 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:30.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:29:30.941 00:29:30.941 --- 10.0.0.1 ping statistics --- 00:29:30.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.941 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:29:30.941 11:20:38 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.941 11:20:38 -- nvmf/common.sh@422 -- # return 0 00:29:30.941 11:20:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:30.941 11:20:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.941 11:20:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:30.941 11:20:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:30.941 11:20:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.941 11:20:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:30.941 11:20:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:30.941 11:20:39 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:29:30.941 11:20:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:30.941 11:20:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:30.941 11:20:39 -- common/autotest_common.sh@10 -- # set +x 00:29:30.941 11:20:39 -- nvmf/common.sh@470 -- # nvmfpid=89421 00:29:30.941 11:20:39 -- nvmf/common.sh@471 -- # waitforlisten 89421 00:29:30.941 11:20:39 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:30.941 11:20:39 -- common/autotest_common.sh@817 -- # '[' -z 89421 ']' 00:29:30.941 11:20:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.941 11:20:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:30.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.941 11:20:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.941 11:20:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:30.941 11:20:39 -- common/autotest_common.sh@10 -- # set +x 00:29:30.941 [2024-04-18 11:20:39.126653] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:30.941 [2024-04-18 11:20:39.126824] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.200 [2024-04-18 11:20:39.303576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:31.458 [2024-04-18 11:20:39.584716] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.458 [2024-04-18 11:20:39.584794] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.458 [2024-04-18 11:20:39.584816] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.458 [2024-04-18 11:20:39.584844] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.458 [2024-04-18 11:20:39.584863] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
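nvmfappstart above boils down to launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and blocking until its RPC socket answers. A rough equivalent is sketched below; the binary path and flags are taken from this log, while the polling loop is an illustration rather than the autotest_common.sh implementation.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the default RPC socket until the target is ready for configuration RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done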
00:29:31.458 [2024-04-18 11:20:39.585736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.458 [2024-04-18 11:20:39.585750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.028 11:20:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:32.028 11:20:40 -- common/autotest_common.sh@850 -- # return 0 00:29:32.028 11:20:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:32.028 11:20:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:32.028 11:20:40 -- common/autotest_common.sh@10 -- # set +x 00:29:32.028 11:20:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.028 11:20:40 -- host/multipath.sh@33 -- # nvmfapp_pid=89421 00:29:32.028 11:20:40 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:32.285 [2024-04-18 11:20:40.402511] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.285 11:20:40 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:32.850 Malloc0 00:29:32.850 11:20:40 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:32.850 11:20:41 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:33.108 11:20:41 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.366 [2024-04-18 11:20:41.484315] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.366 11:20:41 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:33.624 [2024-04-18 11:20:41.716461] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:33.624 11:20:41 -- host/multipath.sh@44 -- # bdevperf_pid=89525 00:29:33.624 11:20:41 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:33.624 11:20:41 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:33.624 11:20:41 -- host/multipath.sh@47 -- # waitforlisten 89525 /var/tmp/bdevperf.sock 00:29:33.624 11:20:41 -- common/autotest_common.sh@817 -- # '[' -z 89525 ']' 00:29:33.624 11:20:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:33.624 11:20:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:33.624 11:20:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:33.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
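Before bdevperf comes up, the multipath test has provisioned the target as traced above: one TCP transport, a 64 MiB malloc bdev with 512-byte blocks as the namespace, and a single subsystem exposed through two listeners on the same address (ports 4420 and 4421) so their ANA states can be flipped independently later in the run. Condensed into the underlying RPC calls, with all names and arguments taken verbatim from this log:
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421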
00:29:33.624 11:20:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:33.624 11:20:41 -- common/autotest_common.sh@10 -- # set +x 00:29:34.998 11:20:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:34.998 11:20:42 -- common/autotest_common.sh@850 -- # return 0 00:29:34.998 11:20:42 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:34.998 11:20:43 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:29:35.255 Nvme0n1 00:29:35.255 11:20:43 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:35.819 Nvme0n1 00:29:35.819 11:20:43 -- host/multipath.sh@78 -- # sleep 1 00:29:35.819 11:20:43 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:36.751 11:20:44 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:29:36.751 11:20:44 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:37.009 11:20:45 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:37.267 11:20:45 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:29:37.267 11:20:45 -- host/multipath.sh@65 -- # dtrace_pid=89618 00:29:37.267 11:20:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:37.267 11:20:45 -- host/multipath.sh@66 -- # sleep 6 00:29:43.826 11:20:51 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:43.826 11:20:51 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:43.826 11:20:51 -- host/multipath.sh@67 -- # active_port=4421 00:29:43.826 11:20:51 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:43.826 Attaching 4 probes... 
00:29:43.826 @path[10.0.0.2, 4421]: 12262 00:29:43.826 @path[10.0.0.2, 4421]: 11807 00:29:43.826 @path[10.0.0.2, 4421]: 11662 00:29:43.826 @path[10.0.0.2, 4421]: 12674 00:29:43.826 @path[10.0.0.2, 4421]: 12646 00:29:43.826 11:20:51 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:43.826 11:20:51 -- host/multipath.sh@69 -- # sed -n 1p 00:29:43.826 11:20:51 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:43.826 11:20:51 -- host/multipath.sh@69 -- # port=4421 00:29:43.826 11:20:51 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:43.826 11:20:51 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:43.826 11:20:51 -- host/multipath.sh@72 -- # kill 89618 00:29:43.826 11:20:51 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:43.826 11:20:51 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:29:43.826 11:20:51 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:43.826 11:20:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:44.084 11:20:52 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:29:44.084 11:20:52 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:44.084 11:20:52 -- host/multipath.sh@65 -- # dtrace_pid=89744 00:29:44.084 11:20:52 -- host/multipath.sh@66 -- # sleep 6 00:29:50.641 11:20:58 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:50.641 11:20:58 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:29:50.641 11:20:58 -- host/multipath.sh@67 -- # active_port=4420 00:29:50.641 11:20:58 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:50.641 Attaching 4 probes... 
00:29:50.641 @path[10.0.0.2, 4420]: 11957 00:29:50.641 @path[10.0.0.2, 4420]: 12112 00:29:50.641 @path[10.0.0.2, 4420]: 12107 00:29:50.641 @path[10.0.0.2, 4420]: 11185 00:29:50.641 @path[10.0.0.2, 4420]: 12454 00:29:50.641 11:20:58 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:50.641 11:20:58 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:50.641 11:20:58 -- host/multipath.sh@69 -- # sed -n 1p 00:29:50.641 11:20:58 -- host/multipath.sh@69 -- # port=4420 00:29:50.641 11:20:58 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:29:50.641 11:20:58 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:29:50.641 11:20:58 -- host/multipath.sh@72 -- # kill 89744 00:29:50.641 11:20:58 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:50.641 11:20:58 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:29:50.641 11:20:58 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:50.641 11:20:58 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:50.898 11:20:58 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:29:50.898 11:20:58 -- host/multipath.sh@65 -- # dtrace_pid=89879 00:29:50.898 11:20:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:50.898 11:20:58 -- host/multipath.sh@66 -- # sleep 6 00:29:57.471 11:21:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:57.471 11:21:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:57.471 11:21:05 -- host/multipath.sh@67 -- # active_port=4421 00:29:57.471 11:21:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:57.471 Attaching 4 probes... 
00:29:57.471 @path[10.0.0.2, 4421]: 9125 00:29:57.471 @path[10.0.0.2, 4421]: 12099 00:29:57.471 @path[10.0.0.2, 4421]: 12201 00:29:57.471 @path[10.0.0.2, 4421]: 12431 00:29:57.471 @path[10.0.0.2, 4421]: 12462 00:29:57.471 11:21:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:57.471 11:21:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:57.471 11:21:05 -- host/multipath.sh@69 -- # sed -n 1p 00:29:57.471 11:21:05 -- host/multipath.sh@69 -- # port=4421 00:29:57.471 11:21:05 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:57.471 11:21:05 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:57.471 11:21:05 -- host/multipath.sh@72 -- # kill 89879 00:29:57.471 11:21:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:57.471 11:21:05 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:29:57.471 11:21:05 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:57.471 11:21:05 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:57.729 11:21:05 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:29:57.729 11:21:05 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:57.729 11:21:05 -- host/multipath.sh@65 -- # dtrace_pid=90010 00:29:57.729 11:21:05 -- host/multipath.sh@66 -- # sleep 6 00:30:04.302 11:21:11 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:30:04.302 11:21:11 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:04.302 11:21:12 -- host/multipath.sh@67 -- # active_port= 00:30:04.302 11:21:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:04.302 Attaching 4 probes... 
00:30:04.302 00:30:04.302 00:30:04.302 00:30:04.302 00:30:04.302 00:30:04.302 11:21:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:04.302 11:21:12 -- host/multipath.sh@69 -- # sed -n 1p 00:30:04.302 11:21:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:04.302 11:21:12 -- host/multipath.sh@69 -- # port= 00:30:04.302 11:21:12 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:30:04.302 11:21:12 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:30:04.302 11:21:12 -- host/multipath.sh@72 -- # kill 90010 00:30:04.302 11:21:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:04.302 11:21:12 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:30:04.302 11:21:12 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:04.302 11:21:12 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:04.561 11:21:12 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:30:04.561 11:21:12 -- host/multipath.sh@65 -- # dtrace_pid=90140 00:30:04.561 11:21:12 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:04.561 11:21:12 -- host/multipath.sh@66 -- # sleep 6 00:30:11.118 11:21:18 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:11.118 11:21:18 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:11.118 11:21:18 -- host/multipath.sh@67 -- # active_port=4421 00:30:11.118 11:21:18 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:11.118 Attaching 4 probes... 
00:30:11.118 @path[10.0.0.2, 4421]: 11641 00:30:11.118 @path[10.0.0.2, 4421]: 12031 00:30:11.118 @path[10.0.0.2, 4421]: 11890 00:30:11.118 @path[10.0.0.2, 4421]: 11806 00:30:11.118 @path[10.0.0.2, 4421]: 11693 00:30:11.118 11:21:18 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:11.118 11:21:18 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:11.118 11:21:18 -- host/multipath.sh@69 -- # sed -n 1p 00:30:11.118 11:21:18 -- host/multipath.sh@69 -- # port=4421 00:30:11.119 11:21:18 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:11.119 11:21:18 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:11.119 11:21:18 -- host/multipath.sh@72 -- # kill 90140 00:30:11.119 11:21:18 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:11.119 11:21:18 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:11.119 [2024-04-18 11:21:19.140419] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140504] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140539] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140589] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140602] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140614] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140627] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140665] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140678] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140691] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 
11:21:19.140703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140741] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140754] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140767] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140779] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140791] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140804] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140817] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140829] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140857] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140870] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 
11:21:19.140973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.140996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141008] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141021] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141086] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141099] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141194] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141220] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141245] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 
11:21:19.141270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141395] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 [2024-04-18 11:21:19.141407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:30:11.119 11:21:19 -- host/multipath.sh@101 -- # sleep 1 00:30:12.051 11:21:20 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:30:12.051 11:21:20 -- host/multipath.sh@65 -- # dtrace_pid=90276 00:30:12.051 11:21:20 -- host/multipath.sh@66 -- # sleep 6 00:30:12.051 11:21:20 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:18.612 11:21:26 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:18.612 11:21:26 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:30:18.612 11:21:26 -- host/multipath.sh@67 -- # active_port=4420 00:30:18.612 11:21:26 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:18.612 Attaching 4 probes... 
00:30:18.612 @path[10.0.0.2, 4420]: 10892 00:30:18.612 @path[10.0.0.2, 4420]: 11322 00:30:18.612 @path[10.0.0.2, 4420]: 11197 00:30:18.612 @path[10.0.0.2, 4420]: 11008 00:30:18.612 @path[10.0.0.2, 4420]: 9892 00:30:18.612 11:21:26 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:18.612 11:21:26 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:18.612 11:21:26 -- host/multipath.sh@69 -- # sed -n 1p 00:30:18.612 11:21:26 -- host/multipath.sh@69 -- # port=4420 00:30:18.612 11:21:26 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:30:18.612 11:21:26 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:30:18.612 11:21:26 -- host/multipath.sh@72 -- # kill 90276 00:30:18.612 11:21:26 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:18.612 11:21:26 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:18.870 [2024-04-18 11:21:26.859257] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:18.870 11:21:26 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:19.129 11:21:27 -- host/multipath.sh@111 -- # sleep 6 00:30:25.709 11:21:33 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:30:25.709 11:21:33 -- host/multipath.sh@65 -- # dtrace_pid=90469 00:30:25.709 11:21:33 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:25.709 11:21:33 -- host/multipath.sh@66 -- # sleep 6 00:30:32.276 11:21:39 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:32.276 11:21:39 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:32.276 11:21:39 -- host/multipath.sh@67 -- # active_port=4421 00:30:32.276 11:21:39 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:32.276 Attaching 4 probes... 
00:30:32.276 @path[10.0.0.2, 4421]: 10725 00:30:32.276 @path[10.0.0.2, 4421]: 11780 00:30:32.276 @path[10.0.0.2, 4421]: 11712 00:30:32.276 @path[10.0.0.2, 4421]: 11780 00:30:32.276 @path[10.0.0.2, 4421]: 11842 00:30:32.276 11:21:39 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:32.276 11:21:39 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:32.276 11:21:39 -- host/multipath.sh@69 -- # sed -n 1p 00:30:32.276 11:21:39 -- host/multipath.sh@69 -- # port=4421 00:30:32.276 11:21:39 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:32.276 11:21:39 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:32.276 11:21:39 -- host/multipath.sh@72 -- # kill 90469 00:30:32.276 11:21:39 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:32.276 11:21:39 -- host/multipath.sh@114 -- # killprocess 89525 00:30:32.276 11:21:39 -- common/autotest_common.sh@936 -- # '[' -z 89525 ']' 00:30:32.276 11:21:39 -- common/autotest_common.sh@940 -- # kill -0 89525 00:30:32.276 11:21:39 -- common/autotest_common.sh@941 -- # uname 00:30:32.276 11:21:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:32.276 11:21:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89525 00:30:32.276 11:21:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:30:32.276 killing process with pid 89525 00:30:32.276 11:21:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:30:32.276 11:21:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89525' 00:30:32.276 11:21:39 -- common/autotest_common.sh@955 -- # kill 89525 00:30:32.276 11:21:39 -- common/autotest_common.sh@960 -- # wait 89525 00:30:32.276 Connection closed with partial response: 00:30:32.276 00:30:32.276 00:30:32.551 11:21:40 -- host/multipath.sh@116 -- # wait 89525 00:30:32.551 11:21:40 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:32.551 [2024-04-18 11:20:41.822063] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:32.551 [2024-04-18 11:20:41.822265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89525 ] 00:30:32.551 [2024-04-18 11:20:41.984647] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.551 [2024-04-18 11:20:42.249405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.551 Running I/O for 90 seconds... 
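Every confirm_io_on_port cycle in this run follows one pattern: probe the running target with bpftrace, give bdevperf six seconds of I/O, then require that the listener reporting the expected ANA state and the listener that actually carried traffic are the same port. The helper below is a reconstruction from the commands in this log, not a copy of host/multipath.sh: the function wrapper, variable names and pipeline order are illustrative, while the individual rpc.py, jq, awk, sed and cut invocations are the ones logged (target pid 89421, NQN nqn.2016-06.io.spdk:cnode1).

  confirm_io_on_port() {
      local expected_state=$1 expected_port=$2
      # Trace which path bdevperf's I/O takes on the live nvmf_tgt
      scripts/bpftrace.sh 89421 scripts/bpf/nvmf_path.bt &> trace.txt &
      dtrace_pid=$!
      sleep 6
      # Listener that the target reports in the expected ANA state
      active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
          | jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
      # Port that actually received I/O, taken from the first @path[...] counter in the trace
      port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | sed -n 1p | cut -d ']' -f1)
      kill $dtrace_pid
      rm -f trace.txt
      [[ $active_port == "$expected_port" && $port == "$expected_port" ]]
  }

With both listeners set inaccessible (the empty @path block earlier in the log), both extractions come back empty and the check passes only because the expected port is also empty, which is exactly what the '' == '' comparisons show.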
00:30:32.551 [2024-04-18 11:20:52.095953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.096967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.096987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.097017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.097038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:32.551 [2024-04-18 11:20:52.097069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.551 [2024-04-18 11:20:52.097090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.097137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.097161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.097192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.097213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.097245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.097265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.097296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.097316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.097347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.097368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.097411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.097435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 
[2024-04-18 11:20:52.098931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.098963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.098984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5480 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:32.552 [2024-04-18 11:20:52.099655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.552 [2024-04-18 11:20:52.099677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.099707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.099729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.099759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.099781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.099832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.099856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.099892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.099915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.099947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.099967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.099998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.100950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.100981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.101003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.101034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.101055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.101087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.101122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.101158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.101179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:32.553 
[2024-04-18 11:20:52.101210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.101231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.101262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.101284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.101316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.101337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.101368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.101389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.103928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.103976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.104029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.104069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.104120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.104146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.104180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.104201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.104233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.104254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.104285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.104307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.104339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.104360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.104390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.104411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.104443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.553 [2024-04-18 11:20:52.104464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:32.553 [2024-04-18 11:20:52.104497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.104519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.104568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.554 [2024-04-18 11:20:52.104593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.104625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.554 [2024-04-18 11:20:52.104646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.104677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.554 [2024-04-18 11:20:52.104709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.104739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.554 [2024-04-18 11:20:52.104761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.104804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.554 [2024-04-18 11:20:52.104827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.104859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.554 [2024-04-18 11:20:52.104880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.104913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.554 [2024-04-18 11:20:52.104934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.104967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.104988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.105945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.105977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 
[2024-04-18 11:20:52.105997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.106029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.106051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.106082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.106118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.106162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.106194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.106230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.106252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.106978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.107015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.107060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.107085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.107135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.107159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.107191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.107212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:32.554 [2024-04-18 11:20:52.107243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.554 [2024-04-18 11:20:52.107265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5864 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:103 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.107965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.107996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.108915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:32.555 
[2024-04-18 11:20:52.108976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.108999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.109030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.109051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.109083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.109116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.109152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.109174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.109206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.109227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.109258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.109278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.109310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.109331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.109362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.109383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:32.555 [2024-04-18 11:20:52.109420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.555 [2024-04-18 11:20:52.109442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.109473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.109494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 
sqhd:005f p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.109525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.109546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.109577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.109597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.109629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.109658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.109714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.109736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.109769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.109790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.109821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.109842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.109874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.109895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.109925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.109946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.109977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.109998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.110029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.110049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.110080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.110118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.110155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.110176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.110208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.110229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.110260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.110281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.110319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.110341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.110385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.110409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.110440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.110461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.110492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.110513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.110545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.110567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.111643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.556 [2024-04-18 11:20:52.111682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:32.556 [2024-04-18 11:20:52.111727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.111759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.111793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.111815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.111847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.111869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.111900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.111922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.111953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.111974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 
[2024-04-18 11:20:52.112274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5720 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.112951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.112982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:92 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.557 [2024-04-18 11:20:52.113570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:32.557 [2024-04-18 11:20:52.113602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.557 [2024-04-18 11:20:52.113623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.113654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.558 [2024-04-18 11:20:52.113675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.113707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.558 [2024-04-18 11:20:52.113727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.113758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.558 [2024-04-18 11:20:52.113779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.113810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.558 [2024-04-18 11:20:52.113830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.113862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.558 [2024-04-18 11:20:52.113883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.113914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.558 [2024-04-18 11:20:52.113935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.113966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.113987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:32.558 
[2024-04-18 11:20:52.114495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.114954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.114986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.115007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 
sqhd:002e p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.115039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.115060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.115093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.115328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.115391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.115416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.116610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.116651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.116699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.116724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.116757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.116781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.116813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.116835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.116867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.116889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.116921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.116943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.117190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.117305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.117611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.558 [2024-04-18 11:20:52.117644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:32.558 [2024-04-18 11:20:52.117678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.117700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.117734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.117755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.117786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.117809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.117840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.117861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.117903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.117925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.117957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.117979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 
[2024-04-18 11:20:52.118703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.118953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.118976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5328 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:32.559 [2024-04-18 11:20:52.119560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.559 [2024-04-18 11:20:52.119581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.119613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.119646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.119680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.119702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.119734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.119756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.119786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:54 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.119808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.119839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.119861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.119892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.119913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.119966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.119988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.120768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.120789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:32.560 
[2024-04-18 11:20:52.122305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.122960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.122989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.123030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.123059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.123100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.123147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.123191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.123221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.123263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.123292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.560 [2024-04-18 11:20:52.123332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.560 [2024-04-18 11:20:52.123372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.123417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.123446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.123487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.123516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.123556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.123586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.123628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.123658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.123701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.123739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.123800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.123827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.123859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.123881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.123912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.123933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.123965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.123986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.124039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.124092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.124173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.124230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.124284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.124336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.124389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.124442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.124496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.561 [2024-04-18 11:20:52.124575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.561 [2024-04-18 11:20:52.124630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.561 [2024-04-18 11:20:52.124685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.561 [2024-04-18 11:20:52.124745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.561 [2024-04-18 11:20:52.124798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.561 [2024-04-18 11:20:52.124850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.561 
[2024-04-18 11:20:52.124914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.561 [2024-04-18 11:20:52.124967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:32.561 [2024-04-18 11:20:52.124999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5160 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.125959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.125990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.126012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.126043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:80 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.126064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.126119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.126145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.562 [2024-04-18 11:20:52.127882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:32.562 [2024-04-18 11:20:52.127914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.127934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.127966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.127996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:30:32.563 [2024-04-18 11:20:52.128212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.128963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.128994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.129015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.129046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.129068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.129100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.129139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.129173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.129195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.129226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.129248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.129279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.129300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.129332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.129353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.129385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.129419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.129453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.563 [2024-04-18 11:20:52.129475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:32.563 [2024-04-18 11:20:52.129508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.129530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.129560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.129582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.129614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.129634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.129666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.129687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.129718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.129740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.129771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.129793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.129824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.129856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.129888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.142181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.142391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.142448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.142560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.142606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.142671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.142746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.142814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.142859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.142921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.142964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.143028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.143070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.143164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.143213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.143277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.143320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.143384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 
[2024-04-18 11:20:52.143426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.143489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.143531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.143595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.143637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.143699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.143741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.143804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.143847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.143909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.143951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.144019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.144064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.146500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.146581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.146695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.146755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.146823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.146868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.146931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5568 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.146973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.147036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.147079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.147176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.147223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.147286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.147329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.147391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.147434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.147498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.147541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.147604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.147646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:32.564 [2024-04-18 11:20:52.147708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.564 [2024-04-18 11:20:52.147751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.147815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.147858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.147948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.147996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.148060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.148128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.148198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.148242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.148304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.148348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.148411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.148455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.148561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.148630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.148707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.148740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.148782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.148810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.148852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.148880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.148920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.148949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.148989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:32.565 
[2024-04-18 11:20:52.149798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.149920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.149952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.565 [2024-04-18 11:20:52.149982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.150016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.565 [2024-04-18 11:20:52.150037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.150069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.565 [2024-04-18 11:20:52.150089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.150141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.565 [2024-04-18 11:20:52.150164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.150195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.565 [2024-04-18 11:20:52.150216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.150246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.565 [2024-04-18 11:20:52.150267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.150299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.565 [2024-04-18 11:20:52.150320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 
cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.150350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.565 [2024-04-18 11:20:52.150371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:32.565 [2024-04-18 11:20:52.150402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.150453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.150504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.150560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.150612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.150674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.150749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.150801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.150853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.150905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.150955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.150976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.151007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.151028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.151058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.151079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.151126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.151151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.151183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.151204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.151236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.151256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.151287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.151308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.151338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.151369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.151403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.151426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.152404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.152442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.152483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.152507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.152552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.152581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.152613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.152635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.152666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.152687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.152727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.152748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.152779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.152800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.152830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.566 [2024-04-18 11:20:52.152851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:32.566 [2024-04-18 11:20:52.152882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.152903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.152933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 
[2024-04-18 11:20:52.152954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.152986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5952 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.153971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.153993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:32.567 [2024-04-18 11:20:52.154628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.567 [2024-04-18 11:20:52.154649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.154679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.154700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.154732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.154755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.154785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.154806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.154837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.154858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.154889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.154910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.154940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.154961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.154991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:32.568 
[2024-04-18 11:20:52.155120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.155878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.155899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.157174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.157215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.157284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.157312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.157345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.157367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.157398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.157419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.157450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.157481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.157511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.157532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.157562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.157583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.157613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.157635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:32.568 [2024-04-18 11:20:52.157680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.568 [2024-04-18 11:20:52.157703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.157733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.157755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.157786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.157808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.157839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.157860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.157890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.157911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.157941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.157962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.157993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 
[2024-04-18 11:20:52.158563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.158966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.158995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.159027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.159048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.159078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5808 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.159099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.159146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.159168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.159198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.569 [2024-04-18 11:20:52.159219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.159250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.569 [2024-04-18 11:20:52.159271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.159301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.569 [2024-04-18 11:20:52.159322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.159353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.569 [2024-04-18 11:20:52.159374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:32.569 [2024-04-18 11:20:52.159404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.569 [2024-04-18 11:20:52.159425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.159455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.570 [2024-04-18 11:20:52.159476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.159507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.570 [2024-04-18 11:20:52.159529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.159560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.570 [2024-04-18 11:20:52.159581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.159611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 
nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.159631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.159672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.159695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.159726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.159746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.159777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.159798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.159828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.159849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.159879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.159899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.159929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.159950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.160626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.160649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:32.570 [2024-04-18 11:20:52.161838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:52.161877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:32.570 
[2024-04-18 11:20:58.687413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.570 [2024-04-18 11:20:58.688002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.688965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.688994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.689014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.689043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.689092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.689125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.689165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.689197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.689218] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.689248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.689268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.689298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.689318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.689347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.689367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.689397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.689418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.689447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.571 [2024-04-18 11:20:58.689467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:32.571 [2024-04-18 11:20:58.689496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.571 [2024-04-18 11:20:58.689518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.689547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.689567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.689597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.689618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.689647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.689668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.689697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 
11:20:58.689717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.689757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.689780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.689825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.689845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.689889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.689908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.689936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.689956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.689983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56832 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:32.572 [2024-04-18 11:20:58.690730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.572 [2024-04-18 11:20:58.690751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.690780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.690800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.690828] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.690848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.690894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.690914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.695337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.695454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.695584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.695685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.695791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.695903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.696011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.696154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.696265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.696367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.696471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.696567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.696696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.696796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.696998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.697115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 
11:20:58.697255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.697361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.697479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.697573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.697692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.697792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.697901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.698008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.698128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.698247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.698360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.698485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.698610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.698699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.698802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.698924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.699033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.699176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.699300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.699407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.699546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.699656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.699761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.699869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.699977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.700067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.700189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.700295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.700389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.700499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.700637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.700736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.700843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.700951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.701066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.701182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.701300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.701405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.701558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.701662] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.701782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.701882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.702031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.702159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:32.573 [2024-04-18 11:20:58.702287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.573 [2024-04-18 11:20:58.702412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.702567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.702663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.702769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.702872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.702981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.703082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.703196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.703306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.703414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.703524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.703629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.703719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.703833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 
11:20:58.703936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.704046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.704173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.704291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.704391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.704533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.704649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.704767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.704847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.704950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.705103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.705230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.705327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.705444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.705546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.705653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.705760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.706818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.706950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.707080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57304 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.707207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.707317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.707415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.707540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.707627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.707727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.707828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.707929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.708019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.708159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.708262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:20:58.708644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:20:58.708766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:21:05.754857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.574 [2024-04-18 11:21:05.754963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:21:05.755054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.574 [2024-04-18 11:21:05.755084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:21:05.755135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.574 [2024-04-18 11:21:05.755159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:21:05.755191] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.574 [2024-04-18 11:21:05.755212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:32.574 [2024-04-18 11:21:05.755243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.574 [2024-04-18 11:21:05.755263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755799] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.755954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.755974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.756005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.756026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.756056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.756077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.756123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.756145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.756177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.756197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.756227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.756248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.756278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.756298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 
m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.756342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.756365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.756942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.756977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:32.575 [2024-04-18 11:21:05.757674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.575 [2024-04-18 11:21:05.757695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.757727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.757747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.757780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.757800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.757833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.757854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.757886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.757907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.757939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.757959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.757991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.758012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.758064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.758151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.758213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.758268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.758334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.576 [2024-04-18 11:21:05.758393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.576 [2024-04-18 11:21:05.758449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.576 [2024-04-18 11:21:05.758503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.576 [2024-04-18 11:21:05.758557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:32.576 [2024-04-18 11:21:05.758610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.576 [2024-04-18 11:21:05.758663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.576 [2024-04-18 11:21:05.758718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.576 [2024-04-18 11:21:05.758771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.576 [2024-04-18 11:21:05.758825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.576 [2024-04-18 11:21:05.758877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.758930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.758962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.758990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.759024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.759045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.759078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.759099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.759151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.759172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.759204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.759224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:32.576 [2024-04-18 11:21:05.759257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.576 [2024-04-18 11:21:05.759279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.759953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.759985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:30:32.577 [2024-04-18 11:21:05.760266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.760659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.760680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.761024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.761060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.761124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.761150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.761188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.761210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.761247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.761268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.761304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.761325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.761361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.761382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.761419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.761453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:32.577 [2024-04-18 11:21:05.761492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.577 [2024-04-18 11:21:05.761514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.761551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.761573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.761609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.761630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.761667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.761688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.761724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.761745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.761782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.761803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.761839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.761860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.761896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.761917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.761953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.761974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.762010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.762032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.762074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.762096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.762150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.762182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.762220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.762242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.762278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.762299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.762335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:32.578 [2024-04-18 11:21:05.762356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.762391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.762412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.762449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.762471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.762507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.578 [2024-04-18 11:21:05.762528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.578 [2024-04-18 11:21:05.762564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.762586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.762622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.762643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.762679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.762700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.762737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.762758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.762793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.762815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.762850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.762871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.762920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 
lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.762942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.762979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.763000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.763037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.763065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.763116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.763140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.763184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.763205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.763241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.763262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:05.763299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.579 [2024-04-18 11:21:05.763320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.142314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.142421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.142479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.142516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:101 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130072 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.144979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.144999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.145016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.145036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.145054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.145084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.145127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.579 [2024-04-18 11:21:19.145151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-04-18 11:21:19.145181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 
[2024-04-18 11:21:19.145849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.145978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.145998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.146016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.146036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.146054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.146075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.146093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.146140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.146168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.146190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.146209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.146229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.146248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.146279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.146299] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.146320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-04-18 11:21:19.146338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.580 [2024-04-18 11:21:19.146359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.146858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.146901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.146941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.146963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.146981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.581 [2024-04-18 11:21:19.147592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.147637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.147703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.147742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.581 [2024-04-18 11:21:19.147791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.581 [2024-04-18 11:21:19.147812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.147830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.147862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.147880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.147901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.147919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.147940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.147966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.147988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:32.582 [2024-04-18 11:21:19.148119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 
11:21:19.148581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.148974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.582 [2024-04-18 11:21:19.148992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.149013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.582 [2024-04-18 11:21:19.149045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.149074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.582 [2024-04-18 11:21:19.149093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.582 [2024-04-18 11:21:19.149152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.582 [2024-04-18 11:21:19.149186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.583 [2024-04-18 11:21:19.149225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.583 [2024-04-18 11:21:19.149263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.583 [2024-04-18 11:21:19.149302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.583 [2024-04-18 11:21:19.149341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.583 [2024-04-18 11:21:19.149380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:117 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.149965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.583 [2024-04-18 11:21:19.149984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.150003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:30:32.583 [2024-04-18 11:21:19.150045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:32.583 [2024-04-18 11:21:19.150074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:32.583 [2024-04-18 11:21:19.150090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130784 len:8 PRP1 0x0 PRP2 0x0 00:30:32.583 [2024-04-18 11:21:19.150124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.150438] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007a40 was disconnected and freed. reset controller. 
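A note on the long run of completions above: every one of them reports the same status, ABORTED - SQ DELETION (00/08), meaning the READ/WRITE commands still queued on qpair 1 were completed manually by the host driver when that queue pair was disconnected and freed for the controller reset, not failed by the device. One quick way to confirm from a saved copy of this console output that nothing other than SQ-deletion aborts was reported (the log file name below is only an example, not something produced by this job):

  # Count the completions aborted because submission queue 1 was deleted.
  grep -c 'ABORTED - SQ DELETION (00/08) qid:1' nvmf-tcp-vg-autotest.log
  # Any completion printed with a different status would be listed by this second pass.
  grep 'spdk_nvme_print_completion' nvmf-tcp-vg-autotest.log | grep -v 'SQ DELETION' || true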
00:30:32.583 [2024-04-18 11:21:19.150683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.583 [2024-04-18 11:21:19.150718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.150752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.583 [2024-04-18 11:21:19.150773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.150792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.583 [2024-04-18 11:21:19.150810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.150830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.583 [2024-04-18 11:21:19.150847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.583 [2024-04-18 11:21:19.150865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006a40 is same with the state(5) to be set 00:30:32.583 [2024-04-18 11:21:19.152552] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:32.583 [2024-04-18 11:21:19.152627] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006a40 (9): Bad file descriptor 00:30:32.583 [2024-04-18 11:21:19.152802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.583 [2024-04-18 11:21:19.152885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.583 [2024-04-18 11:21:19.152917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006a40 with addr=10.0.0.2, port=4421 00:30:32.583 [2024-04-18 11:21:19.152940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006a40 is same with the state(5) to be set 00:30:32.583 [2024-04-18 11:21:19.152977] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006a40 (9): Bad file descriptor 00:30:32.583 [2024-04-18 11:21:19.153035] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:32.583 [2024-04-18 11:21:19.153059] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:32.584 [2024-04-18 11:21:19.153078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:32.584 [2024-04-18 11:21:19.153142] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:32.584 [2024-04-18 11:21:19.153168] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:32.584 [2024-04-18 11:21:29.264893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
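The block just above is the failover itself: the admin queue pair is torn down, bdev_nvme disconnects the controller, the first reconnect attempts to 10.0.0.2 port 4421 fail with errno 111 (connection refused, presumably because nothing is listening on that port at that moment), the reset is marked failed, and roughly ten seconds later a retry succeeds ("Resetting controller successful"). A failover of this kind can be driven from the target side with listener RPCs alone; a minimal sketch, using the subsystem name and addresses from this log rather than the exact commands of the real multipath.sh:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the first path; I/O still queued on that queue pair is aborted (SQ DELETION).
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  # Bring up the second path; the host reconnect logic then attaches on port 4421.
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421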
00:30:32.584 Received shutdown signal, test time was about 55.675456 seconds 00:30:32.584 00:30:32.584 Latency(us) 00:30:32.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.584 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:32.584 Verification LBA range: start 0x0 length 0x4000 00:30:32.584 Nvme0n1 : 55.67 5044.66 19.71 0.00 0.00 25339.36 1392.64 7046430.72 00:30:32.584 =================================================================================================================== 00:30:32.584 Total : 5044.66 19.71 0.00 0.00 25339.36 1392.64 7046430.72 00:30:32.584 11:21:40 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:32.842 11:21:40 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:30:32.842 11:21:40 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:32.842 11:21:40 -- host/multipath.sh@125 -- # nvmftestfini 00:30:32.842 11:21:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:32.842 11:21:40 -- nvmf/common.sh@117 -- # sync 00:30:32.842 11:21:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:32.842 11:21:41 -- nvmf/common.sh@120 -- # set +e 00:30:32.842 11:21:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:32.842 11:21:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:32.842 rmmod nvme_tcp 00:30:32.842 rmmod nvme_fabrics 00:30:32.842 rmmod nvme_keyring 00:30:33.100 11:21:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:33.100 11:21:41 -- nvmf/common.sh@124 -- # set -e 00:30:33.100 11:21:41 -- nvmf/common.sh@125 -- # return 0 00:30:33.100 11:21:41 -- nvmf/common.sh@478 -- # '[' -n 89421 ']' 00:30:33.100 11:21:41 -- nvmf/common.sh@479 -- # killprocess 89421 00:30:33.100 11:21:41 -- common/autotest_common.sh@936 -- # '[' -z 89421 ']' 00:30:33.100 11:21:41 -- common/autotest_common.sh@940 -- # kill -0 89421 00:30:33.100 11:21:41 -- common/autotest_common.sh@941 -- # uname 00:30:33.100 11:21:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:33.100 11:21:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89421 00:30:33.100 11:21:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:33.100 killing process with pid 89421 00:30:33.100 11:21:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:33.100 11:21:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89421' 00:30:33.100 11:21:41 -- common/autotest_common.sh@955 -- # kill 89421 00:30:33.100 11:21:41 -- common/autotest_common.sh@960 -- # wait 89421 00:30:34.486 11:21:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:34.486 11:21:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:34.486 11:21:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:34.486 11:21:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:34.486 11:21:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:34.486 11:21:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.486 11:21:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:34.486 11:21:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.486 11:21:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:34.486 00:30:34.486 real 1m4.074s 00:30:34.486 user 3m1.253s 00:30:34.486 sys 0m12.706s 00:30:34.486 11:21:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:34.486 
11:21:42 -- common/autotest_common.sh@10 -- # set +x 00:30:34.486 ************************************ 00:30:34.486 END TEST nvmf_multipath 00:30:34.486 ************************************ 00:30:34.486 11:21:42 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:30:34.486 11:21:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:34.486 11:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:34.486 11:21:42 -- common/autotest_common.sh@10 -- # set +x 00:30:34.744 ************************************ 00:30:34.744 START TEST nvmf_timeout 00:30:34.744 ************************************ 00:30:34.744 11:21:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:30:34.744 * Looking for test storage... 00:30:34.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:34.744 11:21:42 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:34.744 11:21:42 -- nvmf/common.sh@7 -- # uname -s 00:30:34.744 11:21:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.744 11:21:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.744 11:21:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.744 11:21:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.744 11:21:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.744 11:21:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.744 11:21:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.744 11:21:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.744 11:21:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.744 11:21:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.744 11:21:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:30:34.744 11:21:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:30:34.744 11:21:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.744 11:21:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.744 11:21:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:34.744 11:21:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.744 11:21:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:34.744 11:21:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.744 11:21:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.744 11:21:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.744 11:21:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.744 11:21:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.744 11:21:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.744 11:21:42 -- paths/export.sh@5 -- # export PATH 00:30:34.744 11:21:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.744 11:21:42 -- nvmf/common.sh@47 -- # : 0 00:30:34.744 11:21:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:34.744 11:21:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:34.744 11:21:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.744 11:21:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.744 11:21:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.744 11:21:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:34.744 11:21:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:34.744 11:21:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:34.744 11:21:42 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:34.744 11:21:42 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:34.744 11:21:42 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:34.744 11:21:42 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:30:34.744 11:21:42 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:34.744 11:21:42 -- host/timeout.sh@19 -- # nvmftestinit 00:30:34.744 11:21:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:34.744 11:21:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.744 11:21:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:34.744 11:21:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:34.744 11:21:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:34.744 11:21:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.744 11:21:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:34.744 11:21:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.744 11:21:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
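timeout.sh starts by sourcing test/nvmf/common.sh, which generates a fresh host identity with nvme gen-hostnqn (the UUID-based NVME_HOSTNQN and NVME_HOSTID above) and then calls nvmftestinit; with NET_TYPE=virt this builds a veth/namespace topology instead of touching physical NICs. The same identity variables can be reproduced outside the harness with nvme-cli; a small sketch (the host-ID derivation shown here is one plausible way to do it, not necessarily the script's exact line, and the UUID will of course differ from this run's):

  # Generate a UUID-based host NQN the way the harness does, then take the UUID part as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # strip everything up to and including the last ':'
  echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"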
00:30:34.744 11:21:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:30:34.744 11:21:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:30:34.744 11:21:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:30:34.744 11:21:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:30:34.744 11:21:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:30:34.744 11:21:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.744 11:21:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.744 11:21:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:34.744 11:21:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:34.744 11:21:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:34.744 11:21:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:34.744 11:21:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:34.744 11:21:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.744 11:21:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:34.744 11:21:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:34.744 11:21:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:34.744 11:21:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:34.744 11:21:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:34.744 11:21:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:34.744 Cannot find device "nvmf_tgt_br" 00:30:34.744 11:21:42 -- nvmf/common.sh@155 -- # true 00:30:34.744 11:21:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:34.744 Cannot find device "nvmf_tgt_br2" 00:30:34.744 11:21:42 -- nvmf/common.sh@156 -- # true 00:30:34.744 11:21:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:34.744 11:21:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:34.744 Cannot find device "nvmf_tgt_br" 00:30:34.744 11:21:42 -- nvmf/common.sh@158 -- # true 00:30:34.744 11:21:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:34.744 Cannot find device "nvmf_tgt_br2" 00:30:34.744 11:21:42 -- nvmf/common.sh@159 -- # true 00:30:34.744 11:21:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:34.744 11:21:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:34.744 11:21:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:34.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:34.744 11:21:42 -- nvmf/common.sh@162 -- # true 00:30:34.744 11:21:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:34.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:34.744 11:21:42 -- nvmf/common.sh@163 -- # true 00:30:34.744 11:21:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:34.744 11:21:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:35.003 11:21:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:35.003 11:21:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:35.003 11:21:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:35.003 11:21:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:35.003 11:21:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:30:35.003 11:21:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:35.003 11:21:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:35.003 11:21:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:35.003 11:21:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:35.003 11:21:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:35.003 11:21:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:35.003 11:21:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:35.003 11:21:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:35.003 11:21:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:35.003 11:21:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:35.003 11:21:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:35.003 11:21:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:35.003 11:21:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:35.003 11:21:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:35.003 11:21:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:35.003 11:21:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:35.003 11:21:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:35.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:30:35.003 00:30:35.003 --- 10.0.0.2 ping statistics --- 00:30:35.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.003 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:30:35.003 11:21:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:35.003 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:35.003 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:30:35.003 00:30:35.003 --- 10.0.0.3 ping statistics --- 00:30:35.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.003 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:30:35.003 11:21:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:35.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:30:35.003 00:30:35.003 --- 10.0.0.1 ping statistics --- 00:30:35.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.003 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:30:35.003 11:21:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.003 11:21:43 -- nvmf/common.sh@422 -- # return 0 00:30:35.003 11:21:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:35.003 11:21:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.003 11:21:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:35.003 11:21:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:35.003 11:21:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.003 11:21:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:35.003 11:21:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:35.003 11:21:43 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:30:35.003 11:21:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:35.003 11:21:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:35.003 11:21:43 -- common/autotest_common.sh@10 -- # set +x 00:30:35.003 11:21:43 -- nvmf/common.sh@470 -- # nvmfpid=90811 00:30:35.003 11:21:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:35.003 11:21:43 -- nvmf/common.sh@471 -- # waitforlisten 90811 00:30:35.003 11:21:43 -- common/autotest_common.sh@817 -- # '[' -z 90811 ']' 00:30:35.003 11:21:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.003 11:21:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:35.003 11:21:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.003 11:21:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:35.003 11:21:43 -- common/autotest_common.sh@10 -- # set +x 00:30:35.261 [2024-04-18 11:21:43.290536] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:35.261 [2024-04-18 11:21:43.291336] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.261 [2024-04-18 11:21:43.480784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:35.829 [2024-04-18 11:21:43.764421] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.829 [2024-04-18 11:21:43.764517] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.829 [2024-04-18 11:21:43.764543] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.829 [2024-04-18 11:21:43.764577] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.829 [2024-04-18 11:21:43.764612] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
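At this point nvmftestinit has finished: the network namespace nvmf_tgt_ns_spdk holds the target end of the topology, the initiator side carries 10.0.0.1, the target interfaces carry 10.0.0.2 and 10.0.0.3, both ends hang off the nvmf_br bridge, the three pings confirm connectivity, and nvmf_tgt has been started inside the namespace on core mask 0x3 (pid 90811). Condensed to its essentials, the topology the traced ip commands build looks roughly like this (a sketch of the commands seen above, omitting the second target interface, the iptables rules, and the cleanup steps the real common.sh also runs):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # bring every link up on both sides, then ping 10.0.0.2 from the host and 10.0.0.1 from the namespace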
00:30:35.829 [2024-04-18 11:21:43.764810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.829 [2024-04-18 11:21:43.764811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.087 11:21:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:36.087 11:21:44 -- common/autotest_common.sh@850 -- # return 0 00:30:36.087 11:21:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:36.087 11:21:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:36.087 11:21:44 -- common/autotest_common.sh@10 -- # set +x 00:30:36.087 11:21:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.087 11:21:44 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:36.087 11:21:44 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:36.346 [2024-04-18 11:21:44.462396] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.346 11:21:44 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:36.604 Malloc0 00:30:36.604 11:21:44 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:36.862 11:21:45 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:37.133 11:21:45 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.391 [2024-04-18 11:21:45.485376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.391 11:21:45 -- host/timeout.sh@32 -- # bdevperf_pid=90904 00:30:37.391 11:21:45 -- host/timeout.sh@34 -- # waitforlisten 90904 /var/tmp/bdevperf.sock 00:30:37.391 11:21:45 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:37.391 11:21:45 -- common/autotest_common.sh@817 -- # '[' -z 90904 ']' 00:30:37.391 11:21:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:37.391 11:21:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:37.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:37.391 11:21:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:37.391 11:21:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:37.391 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:30:37.650 [2024-04-18 11:21:45.614412] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
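The target is then provisioned entirely over rpc.py, while the host-side bdevperf process is started idle (-z) on its own RPC socket, waiting to be handed a controller: queue depth 128, 4 KiB I/O, verify workload, 10 seconds. Pulled together from the traced commands above, the target-side provisioning sequence is:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB backing bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420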
00:30:37.650 [2024-04-18 11:21:45.614610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90904 ] 00:30:37.650 [2024-04-18 11:21:45.789447] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.908 [2024-04-18 11:21:46.075875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:38.474 11:21:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:38.474 11:21:46 -- common/autotest_common.sh@850 -- # return 0 00:30:38.474 11:21:46 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:38.732 11:21:46 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:38.991 NVMe0n1 00:30:38.991 11:21:47 -- host/timeout.sh@51 -- # rpc_pid=90950 00:30:38.991 11:21:47 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:38.991 11:21:47 -- host/timeout.sh@53 -- # sleep 1 00:30:38.991 Running I/O for 10 seconds... 00:30:39.925 11:21:48 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.188 [2024-04-18 11:21:48.276275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276433] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
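This is the core of the first timeout case: over the bdevperf RPC socket the controller is attached with a 5-second controller-loss timeout and a 2-second reconnect delay, bdevperf.py perform_tests kicks off the 10-second verify run against NVMe0n1, and after a one-second sleep the script removes the 4420 listener out from under it; the repeated recv-state messages before and after this point appear to be the target-side TCP transport logging state transitions as that listener removal propagates to the queue pair. The two RPCs that express the behaviour under test, copied from the trace above (the bdevperf socket path is the one used in this run):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Host side: attach through bdevperf with a short controller-loss timeout.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Target side, one second into the workload: drop the listener to force the reconnect path.
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420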
tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276627] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276664] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276676] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276690] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276702] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276714] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276726] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276774] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.188 [2024-04-18 11:21:48.276799] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276930] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276967] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276979] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.276991] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.277003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.277017] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.277029] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.277041] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.277053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.277065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.277077] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:30:40.189 [2024-04-18 11:21:48.279972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 
11:21:48.280266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.189 [2024-04-18 11:21:48.280830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:29 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.189 [2024-04-18 11:21:48.280907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.189 [2024-04-18 11:21:48.280921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.280937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.280950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.280967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.280980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.280996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62232 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 
11:21:48.281531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.281887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.190 [2024-04-18 11:21:48.281916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.190 [2024-04-18 11:21:48.281947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.190 [2024-04-18 11:21:48.281976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.281993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.190 [2024-04-18 11:21:48.282006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.282022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.190 [2024-04-18 11:21:48.282037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.282053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.190 [2024-04-18 11:21:48.282066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.282082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.190 [2024-04-18 11:21:48.282095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.282125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.282140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.190 [2024-04-18 11:21:48.282157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.190 [2024-04-18 11:21:48.282171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 
11:21:48.282753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.191 [2024-04-18 11:21:48.282885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.282935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.191 [2024-04-18 11:21:48.282953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:30:40.191 [2024-04-18 11:21:48.282967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.283233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.191 [2024-04-18 11:21:48.283253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.191 [2024-04-18 11:21:48.283267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62632 len:8 PRP1 0x0 PRP2 0x0 00:30:40.191 [2024-04-18 11:21:48.283282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.283301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.191 [2024-04-18 11:21:48.283312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.191 [2024-04-18 11:21:48.283324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62640 len:8 PRP1 0x0 PRP2 0x0 00:30:40.191 [2024-04-18 11:21:48.283338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.283351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.191 [2024-04-18 11:21:48.283361] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.191 [2024-04-18 11:21:48.283373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62648 len:8 PRP1 0x0 PRP2 0x0 00:30:40.191 [2024-04-18 11:21:48.283386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.283398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.191 [2024-04-18 11:21:48.283409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.191 [2024-04-18 11:21:48.283421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62656 len:8 PRP1 0x0 PRP2 0x0 00:30:40.191 [2024-04-18 11:21:48.283435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.283448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.191 [2024-04-18 11:21:48.283459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.191 [2024-04-18 11:21:48.283471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62664 len:8 PRP1 0x0 PRP2 0x0 00:30:40.191 [2024-04-18 11:21:48.283484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.283496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.191 [2024-04-18 11:21:48.283506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.191 [2024-04-18 11:21:48.283518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62672 len:8 PRP1 0x0 PRP2 0x0 00:30:40.191 [2024-04-18 11:21:48.283531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.283544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.191 [2024-04-18 11:21:48.283555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.191 [2024-04-18 11:21:48.283566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62680 len:8 PRP1 0x0 PRP2 0x0 00:30:40.191 [2024-04-18 11:21:48.283579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.283592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.191 [2024-04-18 11:21:48.283603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.191 [2024-04-18 11:21:48.283614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62688 len:8 PRP1 0x0 PRP2 0x0 00:30:40.191 [2024-04-18 11:21:48.283641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.191 [2024-04-18 11:21:48.283654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.191 [2024-04-18 11:21:48.283665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:40.191 [2024-04-18 11:21:48.283677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62696 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.283690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.283702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.283712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.283724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62704 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.283737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.283761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.283772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.283784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62712 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.283797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.283809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.283820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.283832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62720 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.283846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.283858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.283868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.283880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62728 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.283892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.283905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.283915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.283927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62736 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.283940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.283954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.283964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 
11:21:48.283976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62744 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.283989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62752 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62760 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62768 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62776 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62784 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62792 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62800 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62808 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62816 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62824 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62832 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:61968 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61976 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61984 len:8 PRP1 0x0 PRP2 0x0 00:30:40.192 [2024-04-18 11:21:48.284697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.192 [2024-04-18 11:21:48.284709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.192 [2024-04-18 11:21:48.284719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.192 [2024-04-18 11:21:48.284742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61992 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.284756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.284769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.284779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.284791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62000 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.284803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.284815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.284826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.284837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62008 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.284850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.284862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.284873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.284884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62016 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 
[2024-04-18 11:21:48.284897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.284910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.284920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.284931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62024 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.284945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.284959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.284970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.284982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62032 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.284995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.285007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.285018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.285030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62040 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.285044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.285056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.285067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.285078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62048 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.285091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.285113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.285126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.285149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62056 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.285163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.285176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.285186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.285197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62064 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.285210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.285222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.285233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.285244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62072 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.285269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.285282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.285293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.285305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62080 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.285317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.285330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.286025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.286058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62088 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.286074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.286091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.286115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.286130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62096 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.286143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.286156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.286167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.286178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62104 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.286191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.286204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.286214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.286225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62112 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.286238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.286250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.286261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.286278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62120 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.286292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.286305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.286316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.286327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62128 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.286340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.286353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.286363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.286375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62136 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.286388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.286400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.286416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.286428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61816 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.286441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.286454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.286465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.286476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61824 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.286490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.193 [2024-04-18 11:21:48.286502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.193 [2024-04-18 11:21:48.286513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.193 [2024-04-18 11:21:48.286525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61832 len:8 PRP1 0x0 PRP2 0x0 00:30:40.193 [2024-04-18 11:21:48.286538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:40.193 [2024-04-18 11:21:48.286550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.286561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.286572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61840 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.286586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.286598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.286610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.286621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61848 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.286635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.286647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.286658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.286676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61856 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.286690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.286702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.286713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.286724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61864 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61872 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61880 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296378] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61888 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61904 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62144 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62152 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62160 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:30:40.194 [2024-04-18 11:21:48.296731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62168 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62176 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62184 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62192 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62200 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.296949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.296962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.296972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.296989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62208 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.297017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.297066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.297091] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.297141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62216 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.297192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.297229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.297259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.297288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62224 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.297321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.297354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.297382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.297412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62232 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.297473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.297507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.297533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.297569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62240 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.297596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.297625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.297654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.297680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62248 len:8 PRP1 0x0 PRP2 0x0 00:30:40.194 [2024-04-18 11:21:48.297715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.194 [2024-04-18 11:21:48.297745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.194 [2024-04-18 11:21:48.297770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.194 [2024-04-18 11:21:48.297798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62256 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.297830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.297863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.297892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.297920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62264 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.297954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.297993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.298033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.298064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62272 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.298099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.298187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.298219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.298252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62280 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.298287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.298320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.298350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.298390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62288 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.298424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.298446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.298461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.298477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62296 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.298506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.298552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.298579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.298606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62304 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.298638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.298670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.298698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 
[2024-04-18 11:21:48.298729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62312 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.298771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.298804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.298833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.298864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62320 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.298897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.298931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.298956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.298980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62328 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.299010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.299040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.299067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.299096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62336 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.299156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.299195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.299225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.299265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62344 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.299299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.299333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.299361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.299391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62352 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.299423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.299453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.299478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.299496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62360 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.299514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.299532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.299546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.299563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62368 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.299580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.299597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.299616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.299652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62376 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.299693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.299727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.299754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.299786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62384 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.299827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.299860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.299888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.299919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62392 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.299953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.299985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.300011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.300043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62400 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.300088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.300141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.300158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.300176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:62408 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.300194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.300211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.300225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.300241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61912 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.300258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.300276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.300289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.300304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61920 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.300322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.300349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.300363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.300944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61928 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.300966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.300990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.301278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.195 [2024-04-18 11:21:48.301301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61936 len:8 PRP1 0x0 PRP2 0x0 00:30:40.195 [2024-04-18 11:21:48.301321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.195 [2024-04-18 11:21:48.301678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.195 [2024-04-18 11:21:48.301718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.301737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61944 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.301756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.302124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.302158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.302177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61952 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 
[2024-04-18 11:21:48.302196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.302520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.302554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.302572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61960 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.302592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.302611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.302626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.302953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62416 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.302976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.302995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.303225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.303247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62424 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.303265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.303284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.303299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.303691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62432 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.303751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.303773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.303788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.303804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62440 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.303822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 11:21:48 -- host/timeout.sh@56 -- # sleep 2 00:30:40.196 [2024-04-18 11:21:48.304414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.304446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.304464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62448 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 
[2024-04-18 11:21:48.304483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.304502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.304517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.304862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62456 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.304903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.304927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.304943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.304961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62464 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.305275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.305312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.305329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.305346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62472 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.305365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.305383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.305668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.305686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62480 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.305705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.305727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.305743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.306136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62488 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.306162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.306182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.306198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.306484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62496 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.306515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.306536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.306552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.306569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62504 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.306587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.306936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.306954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.306971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62512 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.306990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.307009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.307256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.307277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62520 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.307296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.307316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.307332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.307669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62528 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.307688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.307703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.307714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.307726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62536 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.307739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.307863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.307878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.307890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62544 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.308124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.308148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.308160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.308173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62552 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.308186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.308198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.308209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.308221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62560 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.308234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.308247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.308257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.308269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62568 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.308282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.196 [2024-04-18 11:21:48.308294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.196 [2024-04-18 11:21:48.308305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.196 [2024-04-18 11:21:48.308317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62576 len:8 PRP1 0x0 PRP2 0x0 00:30:40.196 [2024-04-18 11:21:48.308330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.197 [2024-04-18 11:21:48.308343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.197 [2024-04-18 11:21:48.308354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.197 [2024-04-18 11:21:48.308366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62584 len:8 PRP1 0x0 PRP2 0x0 00:30:40.197 [2024-04-18 11:21:48.308380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.197 [2024-04-18 11:21:48.308392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.197 [2024-04-18 11:21:48.308403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.197 [2024-04-18 11:21:48.308415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62592 len:8 PRP1 0x0 PRP2 0x0 00:30:40.197 [2024-04-18 11:21:48.308428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:40.197 [2024-04-18 11:21:48.308441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.197 [2024-04-18 11:21:48.308452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.197 [2024-04-18 11:21:48.308464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62600 len:8 PRP1 0x0 PRP2 0x0 00:30:40.197 [2024-04-18 11:21:48.308477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.197 [2024-04-18 11:21:48.308490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.197 [2024-04-18 11:21:48.308502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.197 [2024-04-18 11:21:48.308513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62608 len:8 PRP1 0x0 PRP2 0x0 00:30:40.197 [2024-04-18 11:21:48.308527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.197 [2024-04-18 11:21:48.308540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.197 [2024-04-18 11:21:48.308550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.197 [2024-04-18 11:21:48.308562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62616 len:8 PRP1 0x0 PRP2 0x0 00:30:40.197 [2024-04-18 11:21:48.308575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.197 [2024-04-18 11:21:48.308588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.197 [2024-04-18 11:21:48.308610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.197 [2024-04-18 11:21:48.308627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:30:40.197 [2024-04-18 11:21:48.308641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.197 [2024-04-18 11:21:48.308951] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007040 was disconnected and freed. reset controller. 
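Note on the block above: this is the expected flood when an NVMe/TCP qpair is torn down while the verify workload is still in flight. Every still-queued READ/WRITE on sqid:1 is completed manually with ABORTED - SQ DELETION (00/08), and the block ends with bdev_nvme freeing the qpair and scheduling a controller reset. Rather than reading it entry by entry, the flood can be summarized from a saved copy of this console output; the file name used below is only a placeholder:

    # total number of SQ-deletion aborts in the captured log (file name is hypothetical)
    grep -c 'ABORTED - SQ DELETION' console.log

    # the same aborts split by opcode (READ vs WRITE) on queue 1
    grep -Eo '(READ|WRITE) sqid:1' console.log | sort | uniq -c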
00:30:40.197 [2024-04-18 11:21:48.309254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.197 [2024-04-18 11:21:48.309282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.197 [2024-04-18 11:21:48.309303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.197 [2024-04-18 11:21:48.309317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.197 [2024-04-18 11:21:48.309332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.197 [2024-04-18 11:21:48.309345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.197 [2024-04-18 11:21:48.309366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.197 [2024-04-18 11:21:48.309380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.197 [2024-04-18 11:21:48.309394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:30:40.197 [2024-04-18 11:21:48.309682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:40.197 [2024-04-18 11:21:48.309735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:30:40.197 [2024-04-18 11:21:48.309899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.197 [2024-04-18 11:21:48.309974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.197 [2024-04-18 11:21:48.309999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:30:40.197 [2024-04-18 11:21:48.310015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:30:40.197 [2024-04-18 11:21:48.310050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:30:40.197 [2024-04-18 11:21:48.310078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:40.197 [2024-04-18 11:21:48.310093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:40.197 [2024-04-18 11:21:48.310125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:40.197 [2024-04-18 11:21:48.310162] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
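Note: errno = 111 from posix_sock_create() is ECONNREFUSED. The refused reconnects are consistent with the target no longer listening on 10.0.0.2:4420 at this point in the test (the listener is added back with nvmf_subsystem_add_listener at host/timeout.sh@71 further down), so every retry runs the same cycle: connect() refused, controller left in error state, reinitialization failed, reset failed, then "resetting controller" again. With the console output saved to a file (name is only a placeholder), the number of refused attempts is easy to pull out:

    # count refused reconnect attempts in the captured log (file name is hypothetical)
    grep -c 'connect() failed, errno = 111' console.log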
00:30:40.197 [2024-04-18 11:21:48.310188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:42.099 11:21:50 -- host/timeout.sh@57 -- # get_controller 00:30:42.099 11:21:50 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:42.099 [2024-04-18 11:21:50.310341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.099 [2024-04-18 11:21:50.310441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.099 [2024-04-18 11:21:50.310469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:30:42.099 [2024-04-18 11:21:50.310491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:30:42.099 [2024-04-18 11:21:50.310528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:30:42.099 [2024-04-18 11:21:50.310556] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:42.099 [2024-04-18 11:21:50.310572] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:42.099 [2024-04-18 11:21:50.310588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:42.099 [2024-04-18 11:21:50.310631] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:42.099 11:21:50 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:30:42.099 [2024-04-18 11:21:50.310668] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:42.358 11:21:50 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:30:42.358 11:21:50 -- host/timeout.sh@58 -- # get_bdev 00:30:42.358 11:21:50 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:30:42.358 11:21:50 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:30:42.925 11:21:50 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:30:42.925 11:21:50 -- host/timeout.sh@61 -- # sleep 5 00:30:44.299 [2024-04-18 11:21:52.310850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.299 [2024-04-18 11:21:52.310974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.299 [2024-04-18 11:21:52.311002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:30:44.299 [2024-04-18 11:21:52.311023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:30:44.299 [2024-04-18 11:21:52.311061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:30:44.299 [2024-04-18 11:21:52.311117] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:44.299 [2024-04-18 11:21:52.311137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:44.299 [2024-04-18 11:21:52.311502] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:44.300 [2024-04-18 11:21:52.311569] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [2024-04-18 11:21:52.311588] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:46.215 [2024-04-18 11:21:54.311902] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:47.151
00:30:47.151 Latency(us)
00:30:47.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:47.151 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:47.151 Verification LBA range: start 0x0 length 0x4000
00:30:47.151 NVMe0n1 : 8.20 942.32 3.68 15.61 0.00 133490.00 3202.33 7046430.72
00:30:47.151 ===================================================================================================================
00:30:47.151 Total : 942.32 3.68 15.61 0.00 133490.00 3202.33 7046430.72
00:30:47.151 0
00:30:47.717 11:21:55 -- host/timeout.sh@62 -- # get_controller
00:30:47.717 11:21:55 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:47.974 11:21:56 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:30:47.974 11:21:56 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:30:47.974 11:21:56 -- host/timeout.sh@63 -- # get_bdev
00:30:47.974 11:21:56 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:30:48.233 11:21:56 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:30:48.233 11:21:56 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:30:48.233 11:21:56 -- host/timeout.sh@65 -- # wait 90950
00:30:48.233 11:21:56 -- host/timeout.sh@67 -- # killprocess 90904
00:30:48.233 11:21:56 -- common/autotest_common.sh@936 -- # '[' -z 90904 ']'
00:30:48.233 11:21:56 -- common/autotest_common.sh@940 -- # kill -0 90904
00:30:48.233 11:21:56 -- common/autotest_common.sh@941 -- # uname
00:30:48.233 11:21:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:48.233 11:21:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90904
00:30:48.233 11:21:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:30:48.233 11:21:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:30:48.233 killing process with pid 90904
00:30:48.233 11:21:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90904'
00:30:48.233 11:21:56 -- common/autotest_common.sh@955 -- # kill 90904
00:30:48.233 11:21:56 -- common/autotest_common.sh@960 -- # wait 90904
00:30:48.233 Received shutdown signal, test time was about 9.335458 seconds
00:30:48.233
00:30:48.233 Latency(us)
00:30:48.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:48.233 ===================================================================================================================
00:30:48.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:49.636 11:21:57 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:49.895 [2024-04-18 11:21:57.904081] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:49.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
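For reference, the get_controller and get_bdev checks above (host/timeout.sh@62 and @63) boil down to the rpc.py/jq pipelines visible in the xtrace, and both return empty strings here because the controller never recovered before bdevperf was killed. A minimal reconstruction of the two helpers, taken from the trace rather than from the script source, would look like:

    # reconstructed from the xtrace output above; not copied from host/timeout.sh itself
    get_controller() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    }
    get_bdev() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
    }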
00:30:49.895 11:21:57 -- host/timeout.sh@74 -- # bdevperf_pid=91116 00:30:49.895 11:21:57 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:49.895 11:21:57 -- host/timeout.sh@76 -- # waitforlisten 91116 /var/tmp/bdevperf.sock 00:30:49.895 11:21:57 -- common/autotest_common.sh@817 -- # '[' -z 91116 ']' 00:30:49.895 11:21:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:49.895 11:21:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:49.895 11:21:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:49.895 11:21:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:49.895 11:21:57 -- common/autotest_common.sh@10 -- # set +x 00:30:49.895 [2024-04-18 11:21:58.012816] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:49.895 [2024-04-18 11:21:58.012983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91116 ] 00:30:50.153 [2024-04-18 11:21:58.181338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.411 [2024-04-18 11:21:58.446528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.975 11:21:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:50.975 11:21:59 -- common/autotest_common.sh@850 -- # return 0 00:30:50.975 11:21:59 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:51.232 11:21:59 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:30:51.491 NVMe0n1 00:30:51.491 11:21:59 -- host/timeout.sh@84 -- # rpc_pid=91164 00:30:51.491 11:21:59 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:51.491 11:21:59 -- host/timeout.sh@86 -- # sleep 1 00:30:51.491 Running I/O for 10 seconds... 
00:30:52.424 11:22:00 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.686 [2024-04-18 11:22:00.832006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832119] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832168] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832238] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832281] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832326] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832356] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be 
set 00:30:52.686 [2024-04-18 11:22:00.832384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832398] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832412] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832454] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832496] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832607] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832649] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832676] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 
00:30:52.686 [2024-04-18 11:22:00.832690] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832718] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832732] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832745] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832760] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832774] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832845] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832890] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.832986] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.833000] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 
00:30:52.686 [2024-04-18 11:22:00.833014] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.833028] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.833042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.833061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.833075] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.686 [2024-04-18 11:22:00.833089] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833118] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833189] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833203] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833272] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 
00:30:52.687 [2024-04-18 11:22:00.833344] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833385] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833400] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.833442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:30:52.687 [2024-04-18 11:22:00.834312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.834362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.834416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.834433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.834528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.834546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.834563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.834850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.834875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.834890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.834906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.834920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.835039] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.835313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.835350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.835366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.835383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.835397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.835412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.835541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.835565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.835797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.835833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.835849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.835865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.835888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.835904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.835917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.835933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.835955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.835970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.835984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.836223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.836242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.836258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.836272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.836287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.836426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.836452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.836714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.836739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.836756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.836774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.836789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.837024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.837063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.837198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.837355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.837466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.837485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.837501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.837636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.837659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.837674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.837917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.687 [2024-04-18 11:22:00.837934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.687 [2024-04-18 11:22:00.837950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.837963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.837979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.837993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.838122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.838404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.838439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.838467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.838496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.838619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 
11:22:00.838881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.838917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.838946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.838975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.838990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.839328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.839354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.839369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.839385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.839523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.839669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.839796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.839821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.840036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.840066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.840083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.840133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.840149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.840164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.840178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.840293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.840314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.840558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.840577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.840593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.840608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.840638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.840655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.840671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.841021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.841056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.841071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.841086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.841213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.841376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.841483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.841504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.841518] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.841533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.841547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.841563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.841577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.841846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.841922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.841942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.841957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.841974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.841988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.842003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.842017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.842032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.842391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.842426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.842442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.842459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.842473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.842490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.842504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.688 [2024-04-18 11:22:00.842520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.688 [2024-04-18 11:22:00.842620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.842656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.842672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.842925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.842942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.842959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.842972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.842989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.843002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.843018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.843303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.843408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.843430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.843447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.843461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.843722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.843749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.843767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.843782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:52.689 [2024-04-18 11:22:00.843936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.844182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.844218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.844234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.844250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.844264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.844281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.844427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.844676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.844708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.844728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.844744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.844760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.844773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.844795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.844915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.845075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.845197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.845221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.845236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.845252] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.845266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.845282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.845531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.845686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.845798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.845822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.845837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.845853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.845866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.846002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.846127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.846153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.689 [2024-04-18 11:22:00.846168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.846186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.846311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.846462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.846574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.846595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.846610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.846626] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.846639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.846773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.846880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.846904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.846918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.846936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.846950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.847314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.847345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.847364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.847378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.847393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.847407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.847540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.847648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.847670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.847685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.847701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.847714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.689 [2024-04-18 11:22:00.847979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58504 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.689 [2024-04-18 11:22:00.848260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.848387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.848408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.848424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.848438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.848581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.848831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.848876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.848894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.848910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.848923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.848939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.848952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.849245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.849262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.849279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.849292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.849308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.849321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.849587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 
[2024-04-18 11:22:00.849698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.849722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.849737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.849753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:52.690 [2024-04-18 11:22:00.849767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.849783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.690 [2024-04-18 11:22:00.850025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.850190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.690 [2024-04-18 11:22:00.850318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.850346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.690 [2024-04-18 11:22:00.850362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.850378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.690 [2024-04-18 11:22:00.850391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.850407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.690 [2024-04-18 11:22:00.850535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.850558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.690 [2024-04-18 11:22:00.850706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.850827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.690 [2024-04-18 11:22:00.850844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.850860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006e40 is same with the state(5) to be set 00:30:52.690 [2024-04-18 11:22:00.850880] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:52.690 [2024-04-18 11:22:00.850893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:52.690 [2024-04-18 11:22:00.850907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58392 len:8 PRP1 0x0 PRP2 0x0 00:30:52.690 [2024-04-18 11:22:00.851319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.851862] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000006e40 was disconnected and freed. reset controller. 00:30:52.690 [2024-04-18 11:22:00.852201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.690 [2024-04-18 11:22:00.852250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.852271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.690 [2024-04-18 11:22:00.852285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.852301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.690 [2024-04-18 11:22:00.852313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.852327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.690 [2024-04-18 11:22:00.852340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.690 [2024-04-18 11:22:00.852430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:30:52.690 [2024-04-18 11:22:00.852904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:52.690 [2024-04-18 11:22:00.852964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:30:52.690 [2024-04-18 11:22:00.853302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.690 [2024-04-18 11:22:00.853391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.690 [2024-04-18 11:22:00.853671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:30:52.690 [2024-04-18 11:22:00.853710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:30:52.690 [2024-04-18 11:22:00.853745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:30:52.690 [2024-04-18 11:22:00.853773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:52.690 [2024-04-18 11:22:00.853788] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:52.690 [2024-04-18 11:22:00.854322] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:52.690 [2024-04-18 11:22:00.854388] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:52.690 [2024-04-18 11:22:00.854410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:52.690 11:22:00 -- host/timeout.sh@90 -- # sleep 1
00:30:54.064 [2024-04-18 11:22:01.854822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.064 [2024-04-18 11:22:01.854957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.064 [2024-04-18 11:22:01.854986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420
00:30:54.064 [2024-04-18 11:22:01.855007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set
00:30:54.064 [2024-04-18 11:22:01.855048] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor
00:30:54.064 [2024-04-18 11:22:01.855077] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:54.064 [2024-04-18 11:22:01.855093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:54.064 [2024-04-18 11:22:01.855123] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:54.064 [2024-04-18 11:22:01.855175] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:54.065 [2024-04-18 11:22:01.855196] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:54.065 11:22:01 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:54.065 [2024-04-18 11:22:02.125280] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:54.065 11:22:02 -- host/timeout.sh@92 -- # wait 91164
00:30:55.003 [2024-04-18 11:22:02.867187] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:01.599
00:31:01.599 Latency(us)
00:31:01.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:01.599 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:01.599 Verification LBA range: start 0x0 length 0x4000
00:31:01.599 NVMe0n1 : 10.01 4724.43 18.45 0.00 0.00 27050.32 2636.33 3050402.91
00:31:01.599 ===================================================================================================================
00:31:01.599 Total : 4724.43 18.45 0.00 0.00 27050.32 2636.33 3050402.91
00:31:01.599 0
00:31:01.599 11:22:09 -- host/timeout.sh@97 -- # rpc_pid=91281
00:31:01.599 11:22:09 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:01.599 11:22:09 -- host/timeout.sh@98 -- # sleep 1
00:31:01.857 Running I/O for 10 seconds...
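A quick cross-check of the bdevperf summary above (this check is not part of the test output): the MiB/s column should equal the reported IOPS times the 4096-byte IO size used by this job, converted to MiB. A minimal Python sketch using the values printed in the summary:

# Cross-check of the run summary above; values copied from the log lines.
iops = 4724.43                 # from the "NVMe0n1 : 10.01 4724.43 ..." row
io_size_bytes = 4096           # from "IO size: 4096" in the Job line
mib_per_second = iops * io_size_bytes / (1024 * 1024)
print(round(mib_per_second, 2))  # 18.45, matching the MiB/s column in both rows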
00:31:02.795 11:22:10 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.795 [2024-04-18 11:22:10.992004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992077] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992116] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992144] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992181] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992194] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.795 [2024-04-18 11:22:10.992230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992254] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992266] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992292] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992304] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be 
set 00:31:02.796 [2024-04-18 11:22:10.992340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992352] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992377] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992576] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 
00:31:02.796 [2024-04-18 11:22:10.992599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992623] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992710] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992723] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992734] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.992747] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:02.796 [2024-04-18 11:22:10.993618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.993671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.993727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.993757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.993790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.993816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.993844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.993870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.993900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.993925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.993954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.993978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.796 [2024-04-18 11:22:10.994740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.796 [2024-04-18 11:22:10.994764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.994793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.994817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.994846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.994871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.994899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.994924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.994952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.994976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 
11:22:10.995028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.995962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.995991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.996015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.996044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.996068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.996096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.996136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.996166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.996190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.996216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.996239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.996264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.996289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.996316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.996338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.996363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.996388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.996416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.996440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.996469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.996493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.997430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.997474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.997511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.997538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.997568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.997593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.997623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.997649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.997679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.997703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.997733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.997757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.997788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.997813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.997841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.797 [2024-04-18 11:22:10.997866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.797 [2024-04-18 11:22:10.997894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.997919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.997951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.997975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 
[2024-04-18 11:22:10.998244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998801] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.998964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.998989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:10.999959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:10.999988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:11.000012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:11.000041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:11.000065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.798 [2024-04-18 11:22:11.000093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.798 [2024-04-18 11:22:11.000137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62544 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.799 [2024-04-18 11:22:11.000897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.000950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.000980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:02.799 [2024-04-18 11:22:11.001131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001685] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.799 [2024-04-18 11:22:11.001743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.001769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009c40 is same with the state(5) to be set 00:31:02.799 [2024-04-18 11:22:11.001804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:02.799 [2024-04-18 11:22:11.001827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:02.799 [2024-04-18 11:22:11.001849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62600 len:8 PRP1 0x0 PRP2 0x0 00:31:02.799 [2024-04-18 11:22:11.001874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.002299] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009c40 was disconnected and freed. reset controller. 00:31:02.799 [2024-04-18 11:22:11.002501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.799 [2024-04-18 11:22:11.002537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.002567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.799 [2024-04-18 11:22:11.002591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.799 [2024-04-18 11:22:11.002624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.800 [2024-04-18 11:22:11.002650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.800 [2024-04-18 11:22:11.002675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.800 [2024-04-18 11:22:11.002706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.800 [2024-04-18 11:22:11.002729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:31:02.800 [2024-04-18 11:22:11.003082] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:02.800 [2024-04-18 11:22:11.003158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:31:02.800 [2024-04-18 11:22:11.003359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.800 [2024-04-18 11:22:11.003457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.800 [2024-04-18 11:22:11.003496] 
nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:31:02.800 [2024-04-18 11:22:11.003524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:31:02.800 [2024-04-18 11:22:11.003572] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:31:02.800 [2024-04-18 11:22:11.003615] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:02.800 [2024-04-18 11:22:11.003640] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:02.800 [2024-04-18 11:22:11.003664] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:02.800 [2024-04-18 11:22:11.003719] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:02.800 [2024-04-18 11:22:11.003749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:03.058 11:22:11 -- host/timeout.sh@101 -- # sleep 3 00:31:03.994 [2024-04-18 11:22:12.003983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.994 [2024-04-18 11:22:12.004153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.994 [2024-04-18 11:22:12.004194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:31:03.994 [2024-04-18 11:22:12.004216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:31:03.994 [2024-04-18 11:22:12.004260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:31:03.994 [2024-04-18 11:22:12.004289] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:03.994 [2024-04-18 11:22:12.004305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:03.994 [2024-04-18 11:22:12.004321] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:03.994 [2024-04-18 11:22:12.004366] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:03.994 [2024-04-18 11:22:12.004385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:04.972 [2024-04-18 11:22:13.004559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.972 [2024-04-18 11:22:13.004695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.972 [2024-04-18 11:22:13.004723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:31:04.972 [2024-04-18 11:22:13.004745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:31:04.972 [2024-04-18 11:22:13.004795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:31:04.972 [2024-04-18 11:22:13.004841] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:04.972 [2024-04-18 11:22:13.004859] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:04.972 [2024-04-18 11:22:13.004876] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:04.972 [2024-04-18 11:22:13.004920] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:04.972 [2024-04-18 11:22:13.004939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:05.905 [2024-04-18 11:22:14.008373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.905 [2024-04-18 11:22:14.008496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.905 [2024-04-18 11:22:14.008533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:31:05.905 [2024-04-18 11:22:14.008554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:31:05.905 [2024-04-18 11:22:14.008841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:31:05.905 [2024-04-18 11:22:14.009145] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:05.905 [2024-04-18 11:22:14.009171] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:05.905 [2024-04-18 11:22:14.009187] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:05.905 [2024-04-18 11:22:14.013457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
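Aside on the retry loop above: each reconnect attempt fails in posix_sock_create() with errno = 111, i.e. ECONNREFUSED, because the target listener on 10.0.0.2:4420 is down at this point in the test, so bdev_nvme marks the controller failed and schedules another reset roughly once per second until host/timeout.sh re-adds the listener below. A quick, non-SPDK way to confirm the errno mapping from a shell:

    python3 -c 'import os; print(os.strerror(111))'    # prints: Connection refused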
00:31:05.905 [2024-04-18 11:22:14.013499] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:05.905 11:22:14 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.163 [2024-04-18 11:22:14.300655] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.163 11:22:14 -- host/timeout.sh@103 -- # wait 91281 00:31:07.097 [2024-04-18 11:22:15.052283] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:12.366 00:31:12.366 Latency(us) 00:31:12.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.366 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:12.366 Verification LBA range: start 0x0 length 0x4000 00:31:12.366 NVMe0n1 : 10.01 4049.83 15.82 3526.05 0.00 16851.22 1050.07 3035150.89 00:31:12.366 =================================================================================================================== 00:31:12.366 Total : 4049.83 15.82 3526.05 0.00 16851.22 0.00 3035150.89 00:31:12.366 0 00:31:12.366 11:22:19 -- host/timeout.sh@105 -- # killprocess 91116 00:31:12.366 11:22:19 -- common/autotest_common.sh@936 -- # '[' -z 91116 ']' 00:31:12.366 11:22:19 -- common/autotest_common.sh@940 -- # kill -0 91116 00:31:12.366 11:22:19 -- common/autotest_common.sh@941 -- # uname 00:31:12.366 11:22:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:12.366 11:22:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91116 00:31:12.366 killing process with pid 91116 00:31:12.366 Received shutdown signal, test time was about 10.000000 seconds 00:31:12.366 00:31:12.366 Latency(us) 00:31:12.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.366 =================================================================================================================== 00:31:12.366 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:12.366 11:22:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:31:12.366 11:22:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:31:12.366 11:22:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91116' 00:31:12.366 11:22:19 -- common/autotest_common.sh@955 -- # kill 91116 00:31:12.366 11:22:19 -- common/autotest_common.sh@960 -- # wait 91116 00:31:12.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:12.935 11:22:20 -- host/timeout.sh@110 -- # bdevperf_pid=91414 00:31:12.935 11:22:20 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:31:12.935 11:22:20 -- host/timeout.sh@112 -- # waitforlisten 91414 /var/tmp/bdevperf.sock 00:31:12.935 11:22:20 -- common/autotest_common.sh@817 -- # '[' -z 91414 ']' 00:31:12.935 11:22:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:12.935 11:22:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:12.935 11:22:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:31:12.935 11:22:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:12.935 11:22:20 -- common/autotest_common.sh@10 -- # set +x 00:31:12.935 [2024-04-18 11:22:21.061732] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:12.935 [2024-04-18 11:22:21.061891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91414 ] 00:31:13.193 [2024-04-18 11:22:21.223609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.452 [2024-04-18 11:22:21.486167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:14.018 11:22:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:14.018 11:22:21 -- common/autotest_common.sh@850 -- # return 0 00:31:14.018 11:22:21 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 91414 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:31:14.018 11:22:21 -- host/timeout.sh@116 -- # dtrace_pid=91442 00:31:14.018 11:22:21 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:31:14.018 11:22:22 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:31:14.276 NVMe0n1 00:31:14.276 11:22:22 -- host/timeout.sh@124 -- # rpc_pid=91494 00:31:14.276 11:22:22 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:14.276 11:22:22 -- host/timeout.sh@125 -- # sleep 1 00:31:14.534 Running I/O for 10 seconds... 
00:31:15.472 11:22:23 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.733 [2024-04-18 11:22:23.750695] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750791] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750828] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750852] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750925] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750938] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750950] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750962] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750974] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.750986] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be 
set 00:31:15.734 [2024-04-18 11:22:23.750998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751047] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751059] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751071] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751097] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751171] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751222] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751247] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751259] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 
00:31:15.734 [2024-04-18 11:22:23.751287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751386] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751398] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751410] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751529] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751541] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 
00:31:15.734 [2024-04-18 11:22:23.751564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751576] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751711] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751723] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751735] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751747] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751759] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751772] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751783] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.734 [2024-04-18 11:22:23.751795] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 
00:31:15.735 [2024-04-18 11:22:23.751820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751832] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751869] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751881] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751906] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751977] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.751989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752001] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752013] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 
00:31:15.735 [2024-04-18 11:22:23.752073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752086] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752098] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752150] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752187] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752212] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752226] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752238] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.752335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 
00:31:15.735 [2024-04-18 11:22:23.752347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:31:15.735 [2024-04-18 11:22:23.753190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 
11:22:23.753545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.735 [2024-04-18 11:22:23.753784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.735 [2024-04-18 11:22:23.753798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.753814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.753828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.753844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.753858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.753875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.753888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.753905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.753920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.753937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.753950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.753966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.753981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.753997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.754436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.754450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:15.736 [2024-04-18 11:22:23.755668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.736 [2024-04-18 11:22:23.755864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.736 [2024-04-18 11:22:23.755880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.755894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.755911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.755925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.755941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.755955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 
11:22:23.755972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.755985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127984 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.756975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.756992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.757005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.757022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.757036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.757063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.757077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.757094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.757118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.757136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.757150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.737 [2024-04-18 11:22:23.757171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.737 [2024-04-18 11:22:23.757186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 
[2024-04-18 11:22:23.757276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.757981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.757998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.758013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.758029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.758043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.758059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.758073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.758090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.738 [2024-04-18 11:22:23.758115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.758134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007040 is same with the state(5) to be set 00:31:15.738 [2024-04-18 11:22:23.758155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:15.738 [2024-04-18 11:22:23.758168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:15.738 [2024-04-18 11:22:23.758182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115472 len:8 PRP1 0x0 PRP2 0x0 00:31:15.738 [2024-04-18 11:22:23.758202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.738 [2024-04-18 11:22:23.758468] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007040 was disconnected and freed. reset controller. 
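The entries above are the host side tearing down I/O qpair 1: each command still queued on that submission queue is printed by nvme_io_qpair_print_command() and then completed manually with the generic status ABORTED - SQ DELETION (sct 0x0, sc 0x08) before bdev_nvme_disconnected_qpair_cb() frees qpair 0x614000007040 and schedules a controller reset. To get a feel for how many in-flight reads were cancelled this way, the abort completions can simply be counted in a saved copy of this console output; the file name below is only a placeholder:

    # count the manual ABORTED - SQ DELETION completions for qid:1 in a saved console log
    grep -o 'ABORTED - SQ DELETION (00/08) qid:1' console.log | wc -l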
00:31:15.739 [2024-04-18 11:22:23.758591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.739 [2024-04-18 11:22:23.758620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.739 [2024-04-18 11:22:23.758638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.739 [2024-04-18 11:22:23.758652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.739 [2024-04-18 11:22:23.758667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.739 [2024-04-18 11:22:23.758681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.739 [2024-04-18 11:22:23.758696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.739 [2024-04-18 11:22:23.758710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.739 [2024-04-18 11:22:23.758723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:31:15.739 [2024-04-18 11:22:23.759017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.739 [2024-04-18 11:22:23.759060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:31:15.739 [2024-04-18 11:22:23.759219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.739 [2024-04-18 11:22:23.759285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.739 [2024-04-18 11:22:23.759317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:31:15.739 [2024-04-18 11:22:23.759334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:31:15.739 [2024-04-18 11:22:23.759363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:31:15.739 [2024-04-18 11:22:23.759390] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.739 [2024-04-18 11:22:23.759406] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.739 [2024-04-18 11:22:23.759426] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.739 [2024-04-18 11:22:23.759457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
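Each reconnect attempt fails in posix_sock_create() with errno 111 (ECONNREFUSED): the initiator keeps dialling 10.0.0.2:4420, nothing accepts the connection, so nvme_tcp_qpair_connect_sock() reports a sock connection error, controller re-initialization fails, and the controller is left in the failed state. Outside of this scripted run, a first triage step for that errno would be to check whether anything is still listening on the target address; the commands below are a generic sketch of that check and are not part of timeout.sh:

    # generic triage for ECONNREFUSED, not taken from the test scripts
    ss -ltn 'sport = :4420'        # any listener left on the NVMe/TCP port?
    nc -zv 10.0.0.2 4420           # probe the target address directly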
00:31:15.739 [2024-04-18 11:22:23.759475] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.739 11:22:23 -- host/timeout.sh@128 -- # wait 91494 00:31:17.641 [2024-04-18 11:22:25.759783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.641 [2024-04-18 11:22:25.759899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.641 [2024-04-18 11:22:25.759926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:31:17.641 [2024-04-18 11:22:25.759947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:31:17.641 [2024-04-18 11:22:25.759986] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:31:17.641 [2024-04-18 11:22:25.760014] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.641 [2024-04-18 11:22:25.760030] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.641 [2024-04-18 11:22:25.760045] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.641 [2024-04-18 11:22:25.760086] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.641 [2024-04-18 11:22:25.760123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:19.609 [2024-04-18 11:22:27.760812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.609 [2024-04-18 11:22:27.760945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.609 [2024-04-18 11:22:27.760973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:31:19.609 [2024-04-18 11:22:27.761004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:31:19.609 [2024-04-18 11:22:27.761059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:31:19.609 [2024-04-18 11:22:27.761091] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:19.609 [2024-04-18 11:22:27.761107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:19.609 [2024-04-18 11:22:27.761136] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:19.609 [2024-04-18 11:22:27.761181] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:19.609 [2024-04-18 11:22:27.761199] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:22.139 [2024-04-18 11:22:29.761337] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
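The same sequence repeats at 11:22:25, 11:22:27 and 11:22:29: bdev_nvme waits out its reconnect delay, retries the TCP connect, receives ECONNREFUSED again, and finally gives up with "Resetting controller failed." That roughly two-second cadence, and the point at which the retries stop, are governed by the reconnect-delay and controller-loss timeouts the controller was attached with. The invocation below is only an illustration of how such behaviour is requested; it is not claimed to be the exact command or values timeout.sh uses:

    # hypothetical attach with a 2s reconnect delay and a bounded controller-loss timeout
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8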
00:31:22.704 00:31:22.704 Latency(us) 00:31:22.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.704 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:31:22.704 NVMe0n1 : 8.14 1852.10 7.23 15.72 0.00 68463.43 3172.54 7046430.72 00:31:22.704 =================================================================================================================== 00:31:22.704 Total : 1852.10 7.23 15.72 0.00 68463.43 3172.54 7046430.72 00:31:22.704 0 00:31:22.704 11:22:30 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:22.704 Attaching 5 probes... 00:31:22.704 1272.998452: reset bdev controller NVMe0 00:31:22.704 1273.112406: reconnect bdev controller NVMe0 00:31:22.704 3273.541462: reconnect delay bdev controller NVMe0 00:31:22.704 3273.610589: reconnect bdev controller NVMe0 00:31:22.704 5274.587873: reconnect delay bdev controller NVMe0 00:31:22.704 5274.631659: reconnect bdev controller NVMe0 00:31:22.704 7275.244575: reconnect delay bdev controller NVMe0 00:31:22.704 7275.289081: reconnect bdev controller NVMe0 00:31:22.704 11:22:30 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:31:22.704 11:22:30 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:31:22.704 11:22:30 -- host/timeout.sh@136 -- # kill 91442 00:31:22.704 11:22:30 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:22.704 11:22:30 -- host/timeout.sh@139 -- # killprocess 91414 00:31:22.704 11:22:30 -- common/autotest_common.sh@936 -- # '[' -z 91414 ']' 00:31:22.704 11:22:30 -- common/autotest_common.sh@940 -- # kill -0 91414 00:31:22.704 11:22:30 -- common/autotest_common.sh@941 -- # uname 00:31:22.704 11:22:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:22.704 11:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91414 00:31:22.704 11:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:31:22.704 11:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:31:22.704 killing process with pid 91414 00:31:22.704 11:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91414' 00:31:22.704 11:22:30 -- common/autotest_common.sh@955 -- # kill 91414 00:31:22.704 Received shutdown signal, test time was about 8.209688 seconds 00:31:22.704 00:31:22.704 Latency(us) 00:31:22.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.704 =================================================================================================================== 00:31:22.704 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:22.704 11:22:30 -- common/autotest_common.sh@960 -- # wait 91414 00:31:24.079 11:22:32 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.337 11:22:32 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:31:24.337 11:22:32 -- host/timeout.sh@145 -- # nvmftestfini 00:31:24.337 11:22:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:24.337 11:22:32 -- nvmf/common.sh@117 -- # sync 00:31:24.337 11:22:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:24.337 11:22:32 -- nvmf/common.sh@120 -- # set +e 00:31:24.337 11:22:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:24.337 11:22:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:24.337 rmmod nvme_tcp 00:31:24.337 rmmod nvme_fabrics 00:31:24.337 rmmod nvme_keyring 00:31:24.337 11:22:32 -- nvmf/common.sh@123 -- 
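The pass/fail decision for this case comes from the probe capture in trace.txt: one 'reconnect delay bdev controller NVMe0' line is recorded per delayed retry, host/timeout.sh counts them with grep -c, and the (( 3 <= 2 )) above is that count (3) being tested against the minimum the test tolerates, after which kill 91442, the removal of trace.txt, and killprocess 91414 clean everything up. Reduced to its essentials, the assertion looks roughly like the sketch below (with a placeholder variable for the trace path, not the literal timeout.sh code):

    # sketch of the reconnect-delay assertion
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
    if (( delays <= 2 )); then
        echo "expected more than 2 delayed reconnects, got $delays" >&2
        exit 1
    fi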
# modprobe -v -r nvme-fabrics 00:31:24.337 11:22:32 -- nvmf/common.sh@124 -- # set -e 00:31:24.337 11:22:32 -- nvmf/common.sh@125 -- # return 0 00:31:24.337 11:22:32 -- nvmf/common.sh@478 -- # '[' -n 90811 ']' 00:31:24.337 11:22:32 -- nvmf/common.sh@479 -- # killprocess 90811 00:31:24.337 11:22:32 -- common/autotest_common.sh@936 -- # '[' -z 90811 ']' 00:31:24.337 11:22:32 -- common/autotest_common.sh@940 -- # kill -0 90811 00:31:24.337 11:22:32 -- common/autotest_common.sh@941 -- # uname 00:31:24.337 11:22:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:24.337 11:22:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90811 00:31:24.337 11:22:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:24.337 killing process with pid 90811 00:31:24.337 11:22:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:24.337 11:22:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90811' 00:31:24.337 11:22:32 -- common/autotest_common.sh@955 -- # kill 90811 00:31:24.337 11:22:32 -- common/autotest_common.sh@960 -- # wait 90811 00:31:25.721 11:22:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:25.721 11:22:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:25.721 11:22:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:25.721 11:22:33 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:25.721 11:22:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:25.721 11:22:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.721 11:22:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.721 11:22:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.721 11:22:33 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:25.721 00:31:25.721 real 0m51.058s 00:31:25.721 user 2m28.674s 00:31:25.721 sys 0m5.070s 00:31:25.721 11:22:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:25.721 11:22:33 -- common/autotest_common.sh@10 -- # set +x 00:31:25.721 ************************************ 00:31:25.721 END TEST nvmf_timeout 00:31:25.721 ************************************ 00:31:25.721 11:22:33 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:31:25.721 11:22:33 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:31:25.721 11:22:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:25.721 11:22:33 -- common/autotest_common.sh@10 -- # set +x 00:31:25.721 11:22:33 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:31:25.721 00:31:25.721 real 13m53.445s 00:31:25.721 user 36m13.382s 00:31:25.721 sys 2m50.544s 00:31:25.721 11:22:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:25.721 11:22:33 -- common/autotest_common.sh@10 -- # set +x 00:31:25.721 ************************************ 00:31:25.721 END TEST nvmf_tcp 00:31:25.721 ************************************ 00:31:25.721 11:22:33 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:31:25.721 11:22:33 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:25.721 11:22:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:25.721 11:22:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:25.721 11:22:33 -- common/autotest_common.sh@10 -- # set +x 00:31:25.980 ************************************ 00:31:25.980 START TEST spdkcli_nvmf_tcp 00:31:25.980 ************************************ 00:31:25.980 11:22:33 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:25.980 * Looking for test storage... 00:31:25.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:31:25.980 11:22:34 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:31:25.980 11:22:34 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:31:25.980 11:22:34 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:31:25.980 11:22:34 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:25.980 11:22:34 -- nvmf/common.sh@7 -- # uname -s 00:31:25.980 11:22:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.980 11:22:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.980 11:22:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.980 11:22:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.980 11:22:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.980 11:22:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.980 11:22:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.980 11:22:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.980 11:22:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.980 11:22:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.980 11:22:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:31:25.980 11:22:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:31:25.980 11:22:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.980 11:22:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.980 11:22:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:25.980 11:22:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.980 11:22:34 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:25.980 11:22:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.980 11:22:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.980 11:22:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.980 11:22:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.980 11:22:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.980 11:22:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.980 11:22:34 -- paths/export.sh@5 -- # export PATH 00:31:25.980 11:22:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.980 11:22:34 -- nvmf/common.sh@47 -- # : 0 00:31:25.980 11:22:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:25.980 11:22:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:25.980 11:22:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.980 11:22:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.980 11:22:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.980 11:22:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:25.980 11:22:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:25.980 11:22:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:25.980 11:22:34 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:25.980 11:22:34 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:25.980 11:22:34 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:25.980 11:22:34 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:25.980 11:22:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:25.980 11:22:34 -- common/autotest_common.sh@10 -- # set +x 00:31:25.980 11:22:34 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:25.980 11:22:34 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=91739 00:31:25.980 11:22:34 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:25.980 11:22:34 -- spdkcli/common.sh@34 -- # waitforlisten 91739 00:31:25.980 11:22:34 -- common/autotest_common.sh@817 -- # '[' -z 91739 ']' 00:31:25.980 11:22:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.980 11:22:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:25.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.980 11:22:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.980 11:22:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:25.980 11:22:34 -- common/autotest_common.sh@10 -- # set +x 00:31:26.239 [2024-04-18 11:22:34.254849] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
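For the spdkcli test, run_nvmf_tgt launches build/bin/nvmf_tgt with core mask 0x3 (pid 91739) and then sits in waitforlisten until the target is up and its JSON-RPC server answers on /var/tmp/spdk.sock; only after that do the spdkcli_job.py commands further down execute. Stripped of the harness, that startup handshake amounts to roughly the following (an illustrative sketch, not the literal waitforlisten implementation):

    # start the target and poll its JSON-RPC socket until it responds (illustrative)
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done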
00:31:26.239 [2024-04-18 11:22:34.255087] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91739 ] 00:31:26.239 [2024-04-18 11:22:34.441439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:26.496 [2024-04-18 11:22:34.682891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.497 [2024-04-18 11:22:34.682908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.061 11:22:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:27.061 11:22:35 -- common/autotest_common.sh@850 -- # return 0 00:31:27.061 11:22:35 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:27.061 11:22:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:27.061 11:22:35 -- common/autotest_common.sh@10 -- # set +x 00:31:27.061 11:22:35 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:27.061 11:22:35 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:27.061 11:22:35 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:27.062 11:22:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:27.062 11:22:35 -- common/autotest_common.sh@10 -- # set +x 00:31:27.062 11:22:35 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:27.062 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:27.062 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:27.062 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:27.062 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:27.062 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:27.062 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:27.062 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:27.062 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:27.062 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:27.062 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:27.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:27.062 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:27.062 ' 00:31:27.628 [2024-04-18 11:22:35.668517] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:30.161 [2024-04-18 11:22:38.044367] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.108 [2024-04-18 11:22:39.318896] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:33.642 [2024-04-18 11:22:41.669046] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:35.544 [2024-04-18 11:22:43.703028] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:37.448 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:37.448 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:37.448 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:37.448 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:37.448 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:37.448 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:37.448 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:37.448 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:37.448 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:37.448 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:37.448 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:37.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:37.448 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:37.448 11:22:45 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:37.448 11:22:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:37.448 11:22:45 -- common/autotest_common.sh@10 -- # set +x 00:31:37.448 11:22:45 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:37.448 11:22:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:37.448 11:22:45 -- common/autotest_common.sh@10 -- # set +x 00:31:37.448 11:22:45 -- spdkcli/nvmf.sh@69 -- # check_match 00:31:37.448 11:22:45 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:31:37.718 11:22:45 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:37.718 11:22:45 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:37.718 11:22:45 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:37.718 11:22:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:37.718 11:22:45 -- common/autotest_common.sh@10 -- # set +x 00:31:37.976 11:22:45 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:37.976 11:22:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:37.976 11:22:45 -- common/autotest_common.sh@10 -- # set +x 00:31:37.976 11:22:45 -- spdkcli/nvmf.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:37.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:37.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:37.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:37.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:37.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:37.976 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:37.976 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:37.976 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:37.976 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:37.976 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:37.976 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:37.976 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:37.977 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:37.977 ' 00:31:44.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:44.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:44.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:44.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:44.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:44.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:44.538 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:44.538 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:44.538 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:44.538 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:44.538 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:44.538 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:44.538 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:44.538 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:44.538 11:22:51 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:44.538 11:22:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:44.538 11:22:51 -- common/autotest_common.sh@10 -- # set +x 00:31:44.538 11:22:51 -- spdkcli/nvmf.sh@90 -- # killprocess 91739 00:31:44.538 11:22:51 -- common/autotest_common.sh@936 -- # '[' -z 91739 ']' 00:31:44.538 11:22:51 -- common/autotest_common.sh@940 -- # kill -0 91739 00:31:44.538 11:22:51 -- common/autotest_common.sh@941 -- # uname 00:31:44.538 11:22:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:44.538 11:22:51 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 91739 00:31:44.538 11:22:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:44.538 11:22:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:44.538 killing process with pid 91739 00:31:44.538 11:22:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91739' 00:31:44.538 11:22:51 -- common/autotest_common.sh@955 -- # kill 91739 00:31:44.538 [2024-04-18 11:22:51.963654] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:44.538 11:22:51 -- common/autotest_common.sh@960 -- # wait 91739 00:31:45.104 11:22:53 -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:45.104 11:22:53 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:45.104 11:22:53 -- spdkcli/common.sh@13 -- # '[' -n 91739 ']' 00:31:45.104 11:22:53 -- spdkcli/common.sh@14 -- # killprocess 91739 00:31:45.104 11:22:53 -- common/autotest_common.sh@936 -- # '[' -z 91739 ']' 00:31:45.104 11:22:53 -- common/autotest_common.sh@940 -- # kill -0 91739 00:31:45.104 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (91739) - No such process 00:31:45.104 Process with pid 91739 is not found 00:31:45.104 11:22:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 91739 is not found' 00:31:45.104 11:22:53 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:45.104 11:22:53 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:45.104 11:22:53 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:45.104 00:31:45.104 real 0m19.210s 00:31:45.104 user 0m40.190s 00:31:45.104 sys 0m1.279s 00:31:45.104 11:22:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:45.104 11:22:53 -- common/autotest_common.sh@10 -- # set +x 00:31:45.104 ************************************ 00:31:45.104 END TEST spdkcli_nvmf_tcp 00:31:45.104 ************************************ 00:31:45.104 11:22:53 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:45.104 11:22:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:45.104 11:22:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:45.104 11:22:53 -- common/autotest_common.sh@10 -- # set +x 00:31:45.364 ************************************ 00:31:45.364 START TEST nvmf_identify_passthru 00:31:45.364 ************************************ 00:31:45.364 11:22:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:45.364 * Looking for test storage... 
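Note on the spdkcli_nvmf_tcp block above: spdkcli_job.py is fed 'command' / 'expected output' / flag tuples and replays them against the running target, and check_match then verifies the resulting configuration tree. Based on the commands visible in the trace (the redirection of the ll output is implied rather than shown), the verification step amounts to roughly:
# dump the current /nvmf tree and compare it against the stored template
./scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
./test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match   # exits non-zero on mismatch
rm -f test/spdkcli/match_files/spdkcli_nvmf.test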
00:31:45.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:45.364 11:22:53 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:45.364 11:22:53 -- nvmf/common.sh@7 -- # uname -s 00:31:45.364 11:22:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.364 11:22:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.364 11:22:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.364 11:22:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.364 11:22:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.364 11:22:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.364 11:22:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.364 11:22:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.364 11:22:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.364 11:22:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.364 11:22:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:31:45.364 11:22:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:31:45.364 11:22:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.364 11:22:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.364 11:22:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:45.364 11:22:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.364 11:22:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:45.364 11:22:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.364 11:22:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.364 11:22:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.364 11:22:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.364 11:22:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.364 11:22:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.364 11:22:53 -- paths/export.sh@5 -- # export PATH 00:31:45.365 11:22:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.365 11:22:53 -- nvmf/common.sh@47 -- # : 0 00:31:45.365 11:22:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:45.365 11:22:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:45.365 11:22:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.365 11:22:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.365 11:22:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.365 11:22:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:45.365 11:22:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:45.365 11:22:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:45.365 11:22:53 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:45.365 11:22:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.365 11:22:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.365 11:22:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.365 11:22:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.365 11:22:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.365 11:22:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.365 11:22:53 -- paths/export.sh@5 -- # export PATH 00:31:45.365 11:22:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.365 11:22:53 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:31:45.365 11:22:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:45.365 11:22:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.365 11:22:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:45.365 11:22:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:45.365 11:22:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:45.365 11:22:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.365 11:22:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:45.365 11:22:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.365 11:22:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:45.365 11:22:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:45.365 11:22:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:45.365 11:22:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:45.365 11:22:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:45.365 11:22:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:45.365 11:22:53 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:45.365 11:22:53 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:45.365 11:22:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:45.365 11:22:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:45.365 11:22:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:45.365 11:22:53 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:45.365 11:22:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:45.365 11:22:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:45.365 11:22:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:45.365 11:22:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:45.365 11:22:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:45.365 11:22:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:45.365 11:22:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:45.365 11:22:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:45.365 Cannot find device "nvmf_tgt_br" 00:31:45.365 11:22:53 -- nvmf/common.sh@155 -- # true 00:31:45.365 11:22:53 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:45.365 Cannot find device "nvmf_tgt_br2" 00:31:45.365 11:22:53 -- nvmf/common.sh@156 -- # true 00:31:45.365 11:22:53 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:45.365 11:22:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:45.365 Cannot find device "nvmf_tgt_br" 00:31:45.365 11:22:53 -- nvmf/common.sh@158 -- # true 00:31:45.365 11:22:53 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:45.365 Cannot find device "nvmf_tgt_br2" 00:31:45.365 11:22:53 -- nvmf/common.sh@159 -- # true 00:31:45.365 11:22:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:45.365 11:22:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:45.624 11:22:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:45.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:45.624 11:22:53 -- nvmf/common.sh@162 -- # true 00:31:45.624 11:22:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:45.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:31:45.624 11:22:53 -- nvmf/common.sh@163 -- # true 00:31:45.624 11:22:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:45.624 11:22:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:45.624 11:22:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:45.624 11:22:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:45.624 11:22:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:45.624 11:22:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:45.624 11:22:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:45.624 11:22:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:45.624 11:22:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:45.624 11:22:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:45.624 11:22:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:45.624 11:22:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:45.624 11:22:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:45.624 11:22:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:45.624 11:22:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:45.624 11:22:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:45.624 11:22:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:45.624 11:22:53 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:45.624 11:22:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:45.624 11:22:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:45.624 11:22:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:45.624 11:22:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:45.624 11:22:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:45.624 11:22:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:45.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:45.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:31:45.624 00:31:45.624 --- 10.0.0.2 ping statistics --- 00:31:45.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.624 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:31:45.624 11:22:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:45.624 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:45.624 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:31:45.624 00:31:45.624 --- 10.0.0.3 ping statistics --- 00:31:45.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.624 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:31:45.624 11:22:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:45.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:45.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:31:45.624 00:31:45.624 --- 10.0.0.1 ping statistics --- 00:31:45.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.624 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:31:45.624 11:22:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:45.624 11:22:53 -- nvmf/common.sh@422 -- # return 0 00:31:45.624 11:22:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:45.624 11:22:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:45.624 11:22:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:45.624 11:22:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:45.624 11:22:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:45.624 11:22:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:45.624 11:22:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:45.624 11:22:53 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:45.624 11:22:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:45.624 11:22:53 -- common/autotest_common.sh@10 -- # set +x 00:31:45.624 11:22:53 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:45.624 11:22:53 -- common/autotest_common.sh@1510 -- # bdfs=() 00:31:45.624 11:22:53 -- common/autotest_common.sh@1510 -- # local bdfs 00:31:45.624 11:22:53 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:31:45.624 11:22:53 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:31:45.624 11:22:53 -- common/autotest_common.sh@1499 -- # bdfs=() 00:31:45.624 11:22:53 -- common/autotest_common.sh@1499 -- # local bdfs 00:31:45.624 11:22:53 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:45.882 11:22:53 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:45.882 11:22:53 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:31:45.882 11:22:53 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:31:45.882 11:22:53 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:45.882 11:22:53 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:31:45.882 11:22:53 -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:31:45.882 11:22:53 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:31:45.882 11:22:53 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:31:45.882 11:22:53 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:45.882 11:22:53 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:46.141 11:22:54 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:31:46.141 11:22:54 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:31:46.141 11:22:54 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:46.141 11:22:54 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:46.399 11:22:54 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:31:46.399 11:22:54 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:46.399 11:22:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:46.399 11:22:54 -- common/autotest_common.sh@10 -- # set +x 00:31:46.399 11:22:54 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:31:46.399 11:22:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:46.399 11:22:54 -- common/autotest_common.sh@10 -- # set +x 00:31:46.399 11:22:54 -- target/identify_passthru.sh@31 -- # nvmfpid=92259 00:31:46.399 11:22:54 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:46.399 11:22:54 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:46.399 11:22:54 -- target/identify_passthru.sh@35 -- # waitforlisten 92259 00:31:46.399 11:22:54 -- common/autotest_common.sh@817 -- # '[' -z 92259 ']' 00:31:46.399 11:22:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.399 11:22:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:46.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.399 11:22:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.399 11:22:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:46.399 11:22:54 -- common/autotest_common.sh@10 -- # set +x 00:31:46.657 [2024-04-18 11:22:54.641379] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:46.657 [2024-04-18 11:22:54.641571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.657 [2024-04-18 11:22:54.824148] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:46.915 [2024-04-18 11:22:55.131987] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.915 [2024-04-18 11:22:55.132087] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.915 [2024-04-18 11:22:55.132134] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.915 [2024-04-18 11:22:55.132150] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.915 [2024-04-18 11:22:55.132164] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
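For reference, the rpc_cmd sequence that follows performs the passthru bring-up; by hand it looks roughly like the sketch below, using scripts/rpc.py (which rpc_cmd wraps). The paths, the PCIe address 0000:00:10.0 and the 10.0.0.2:4420 listener are the ones used in this run:
# start the target in the test namespace; --wait-for-rpc keeps it in the RPC-only pre-init state
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# passthru identify must be enabled before the framework finishes initializing
./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# expose the local QEMU NVMe drive through a TCP subsystem
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# the test then checks that the serial/model seen over NVMe/TCP match the local controller (12340 / QEMU)
./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep -E 'Serial Number|Model Number'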
00:31:46.915 [2024-04-18 11:22:55.132380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.915 [2024-04-18 11:22:55.133159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.915 [2024-04-18 11:22:55.133302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.915 [2024-04-18 11:22:55.133319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:47.482 11:22:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:47.482 11:22:55 -- common/autotest_common.sh@850 -- # return 0 00:31:47.482 11:22:55 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:47.482 11:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.482 11:22:55 -- common/autotest_common.sh@10 -- # set +x 00:31:47.482 11:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.482 11:22:55 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:47.482 11:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.482 11:22:55 -- common/autotest_common.sh@10 -- # set +x 00:31:47.779 [2024-04-18 11:22:55.950541] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:47.779 11:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.779 11:22:55 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:47.779 11:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.779 11:22:55 -- common/autotest_common.sh@10 -- # set +x 00:31:48.044 [2024-04-18 11:22:55.968384] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.044 11:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.044 11:22:55 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:48.044 11:22:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:48.044 11:22:55 -- common/autotest_common.sh@10 -- # set +x 00:31:48.044 11:22:56 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:31:48.044 11:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.044 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:31:48.044 Nvme0n1 00:31:48.044 11:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.044 11:22:56 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:48.044 11:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.044 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:31:48.044 11:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.044 11:22:56 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:48.044 11:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.044 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:31:48.044 11:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.044 11:22:56 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:48.044 11:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.044 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:31:48.044 [2024-04-18 11:22:56.113537] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.044 11:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:31:48.044 11:22:56 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:48.044 11:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.044 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:31:48.044 [2024-04-18 11:22:56.121184] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:48.044 [ 00:31:48.044 { 00:31:48.044 "allow_any_host": true, 00:31:48.044 "hosts": [], 00:31:48.044 "listen_addresses": [], 00:31:48.044 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:48.044 "subtype": "Discovery" 00:31:48.044 }, 00:31:48.044 { 00:31:48.044 "allow_any_host": true, 00:31:48.044 "hosts": [], 00:31:48.044 "listen_addresses": [ 00:31:48.044 { 00:31:48.044 "adrfam": "IPv4", 00:31:48.044 "traddr": "10.0.0.2", 00:31:48.044 "transport": "TCP", 00:31:48.044 "trsvcid": "4420", 00:31:48.044 "trtype": "TCP" 00:31:48.044 } 00:31:48.044 ], 00:31:48.044 "max_cntlid": 65519, 00:31:48.044 "max_namespaces": 1, 00:31:48.044 "min_cntlid": 1, 00:31:48.044 "model_number": "SPDK bdev Controller", 00:31:48.044 "namespaces": [ 00:31:48.044 { 00:31:48.044 "bdev_name": "Nvme0n1", 00:31:48.044 "name": "Nvme0n1", 00:31:48.044 "nguid": "02AEC590AB844EA78C880B8C7DD173ED", 00:31:48.044 "nsid": 1, 00:31:48.044 "uuid": "02aec590-ab84-4ea7-8c88-0b8c7dd173ed" 00:31:48.044 } 00:31:48.044 ], 00:31:48.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:48.044 "serial_number": "SPDK00000000000001", 00:31:48.044 "subtype": "NVMe" 00:31:48.044 } 00:31:48.044 ] 00:31:48.044 11:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.044 11:22:56 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:48.044 11:22:56 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:48.044 11:22:56 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:48.303 11:22:56 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:31:48.303 11:22:56 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:48.303 11:22:56 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:48.303 11:22:56 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:48.931 11:22:56 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:31:48.931 11:22:56 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:31:48.931 11:22:56 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:31:48.931 11:22:56 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.931 11:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.931 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:31:48.931 11:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.931 11:22:56 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:48.931 11:22:56 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:48.931 11:22:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:48.931 11:22:56 -- nvmf/common.sh@117 -- # sync 00:31:48.931 11:22:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:48.931 11:22:56 -- nvmf/common.sh@120 -- # set +e 00:31:48.931 11:22:56 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:31:48.931 11:22:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:48.931 rmmod nvme_tcp 00:31:48.931 rmmod nvme_fabrics 00:31:48.931 rmmod nvme_keyring 00:31:48.931 11:22:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:48.931 11:22:56 -- nvmf/common.sh@124 -- # set -e 00:31:48.931 11:22:56 -- nvmf/common.sh@125 -- # return 0 00:31:48.931 11:22:56 -- nvmf/common.sh@478 -- # '[' -n 92259 ']' 00:31:48.931 11:22:56 -- nvmf/common.sh@479 -- # killprocess 92259 00:31:48.931 11:22:56 -- common/autotest_common.sh@936 -- # '[' -z 92259 ']' 00:31:48.931 11:22:56 -- common/autotest_common.sh@940 -- # kill -0 92259 00:31:48.931 11:22:56 -- common/autotest_common.sh@941 -- # uname 00:31:48.931 11:22:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:48.931 11:22:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92259 00:31:48.931 killing process with pid 92259 00:31:48.931 11:22:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:48.931 11:22:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:48.931 11:22:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92259' 00:31:48.931 11:22:56 -- common/autotest_common.sh@955 -- # kill 92259 00:31:48.931 [2024-04-18 11:22:56.929581] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:48.931 11:22:56 -- common/autotest_common.sh@960 -- # wait 92259 00:31:50.307 11:22:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:50.307 11:22:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:50.307 11:22:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:50.307 11:22:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:50.307 11:22:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:50.307 11:22:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.307 11:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:50.307 11:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.307 11:22:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:50.307 00:31:50.307 real 0m4.927s 00:31:50.307 user 0m11.630s 00:31:50.307 sys 0m1.274s 00:31:50.307 11:22:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:50.307 11:22:58 -- common/autotest_common.sh@10 -- # set +x 00:31:50.307 ************************************ 00:31:50.307 END TEST nvmf_identify_passthru 00:31:50.307 ************************************ 00:31:50.307 11:22:58 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:31:50.307 11:22:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:50.307 11:22:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:50.307 11:22:58 -- common/autotest_common.sh@10 -- # set +x 00:31:50.307 ************************************ 00:31:50.307 START TEST nvmf_dif 00:31:50.307 ************************************ 00:31:50.307 11:22:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:31:50.307 * Looking for test storage... 
00:31:50.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:50.307 11:22:58 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:50.307 11:22:58 -- nvmf/common.sh@7 -- # uname -s 00:31:50.307 11:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.307 11:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.307 11:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.307 11:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.307 11:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.307 11:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.307 11:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.307 11:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.307 11:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.307 11:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.307 11:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:31:50.307 11:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:31:50.307 11:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.307 11:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.307 11:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:50.307 11:22:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.307 11:22:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:50.307 11:22:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.307 11:22:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.307 11:22:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.307 11:22:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.307 11:22:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.307 11:22:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.307 11:22:58 -- paths/export.sh@5 -- # export PATH 00:31:50.307 11:22:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.307 11:22:58 -- nvmf/common.sh@47 -- # : 0 00:31:50.307 11:22:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:50.307 11:22:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:50.307 11:22:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.307 11:22:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.307 11:22:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.307 11:22:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:50.307 11:22:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:50.307 11:22:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:50.307 11:22:58 -- target/dif.sh@15 -- # NULL_META=16 00:31:50.307 11:22:58 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:50.308 11:22:58 -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:50.308 11:22:58 -- target/dif.sh@15 -- # NULL_DIF=1 00:31:50.308 11:22:58 -- target/dif.sh@135 -- # nvmftestinit 00:31:50.308 11:22:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:50.308 11:22:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.308 11:22:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:50.308 11:22:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:50.308 11:22:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:50.308 11:22:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.308 11:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:50.308 11:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.308 11:22:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:50.308 11:22:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:50.308 11:22:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:50.308 11:22:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:50.308 11:22:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:50.308 11:22:58 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:50.308 11:22:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:50.308 11:22:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.308 11:22:58 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:50.308 11:22:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:50.308 11:22:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:50.308 11:22:58 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:50.308 11:22:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:50.308 11:22:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.308 11:22:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:50.308 11:22:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:50.308 11:22:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:50.308 11:22:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:50.308 11:22:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:50.308 11:22:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:50.308 Cannot find device "nvmf_tgt_br" 
00:31:50.308 11:22:58 -- nvmf/common.sh@155 -- # true 00:31:50.308 11:22:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:50.568 Cannot find device "nvmf_tgt_br2" 00:31:50.568 11:22:58 -- nvmf/common.sh@156 -- # true 00:31:50.568 11:22:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:50.568 11:22:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:50.568 Cannot find device "nvmf_tgt_br" 00:31:50.568 11:22:58 -- nvmf/common.sh@158 -- # true 00:31:50.568 11:22:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:50.568 Cannot find device "nvmf_tgt_br2" 00:31:50.568 11:22:58 -- nvmf/common.sh@159 -- # true 00:31:50.568 11:22:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:50.568 11:22:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:50.568 11:22:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:50.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:50.568 11:22:58 -- nvmf/common.sh@162 -- # true 00:31:50.568 11:22:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:50.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:50.568 11:22:58 -- nvmf/common.sh@163 -- # true 00:31:50.568 11:22:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:50.568 11:22:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:50.568 11:22:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:50.568 11:22:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:50.568 11:22:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:50.568 11:22:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:50.568 11:22:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:50.568 11:22:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:50.568 11:22:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:50.568 11:22:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:50.568 11:22:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:50.568 11:22:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:50.568 11:22:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:50.568 11:22:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:50.568 11:22:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:50.568 11:22:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:50.568 11:22:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:50.568 11:22:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:50.568 11:22:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:50.568 11:22:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:50.826 11:22:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:50.826 11:22:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:50.826 11:22:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:50.826 11:22:58 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:50.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:50.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:31:50.826 00:31:50.826 --- 10.0.0.2 ping statistics --- 00:31:50.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.826 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:31:50.826 11:22:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:50.826 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:50.826 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:31:50.826 00:31:50.826 --- 10.0.0.3 ping statistics --- 00:31:50.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.826 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:31:50.826 11:22:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:50.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:50.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:31:50.826 00:31:50.826 --- 10.0.0.1 ping statistics --- 00:31:50.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.826 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:31:50.826 11:22:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:50.826 11:22:58 -- nvmf/common.sh@422 -- # return 0 00:31:50.826 11:22:58 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:31:50.826 11:22:58 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:51.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:51.082 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:51.082 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:51.083 11:22:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.083 11:22:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:51.083 11:22:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:51.083 11:22:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.083 11:22:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:51.083 11:22:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:51.083 11:22:59 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:51.083 11:22:59 -- target/dif.sh@137 -- # nvmfappstart 00:31:51.083 11:22:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:51.083 11:22:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:51.083 11:22:59 -- common/autotest_common.sh@10 -- # set +x 00:31:51.083 11:22:59 -- nvmf/common.sh@470 -- # nvmfpid=92654 00:31:51.083 11:22:59 -- nvmf/common.sh@471 -- # waitforlisten 92654 00:31:51.083 11:22:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:51.083 11:22:59 -- common/autotest_common.sh@817 -- # '[' -z 92654 ']' 00:31:51.083 11:22:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.083 11:22:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:51.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.083 11:22:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
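For orientation, the topology nvmf_veth_init assembled above keeps the initiator in the default namespace (10.0.0.1/24 on nvmf_init_if), puts the target addresses 10.0.0.2/24 and 10.0.0.3/24 inside nvmf_tgt_ns_spdk, and bridges the peer ends of all three veth pairs over nvmf_br. A condensed sketch of the same setup (the error-tolerant teardown and the individual link-up steps are omitted):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator namespace -> target namespace sanity check
The dif.sh-specific twist is that NVMF_TRANSPORT_OPTS gains --dif-insert-or-strip, so the transport created below performs DIF insertion and stripping on the target side.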
00:31:51.083 11:22:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:51.083 11:22:59 -- common/autotest_common.sh@10 -- # set +x 00:31:51.341 [2024-04-18 11:22:59.367942] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:51.341 [2024-04-18 11:22:59.368114] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.341 [2024-04-18 11:22:59.549144] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.908 [2024-04-18 11:22:59.849656] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.908 [2024-04-18 11:22:59.849750] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:51.908 [2024-04-18 11:22:59.849771] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:51.908 [2024-04-18 11:22:59.849797] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:51.908 [2024-04-18 11:22:59.849814] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:51.908 [2024-04-18 11:22:59.849858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.166 11:23:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:52.166 11:23:00 -- common/autotest_common.sh@850 -- # return 0 00:31:52.166 11:23:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:52.166 11:23:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:52.166 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:31:52.425 11:23:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:52.425 11:23:00 -- target/dif.sh@139 -- # create_transport 00:31:52.425 11:23:00 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:52.425 11:23:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:52.425 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:31:52.425 [2024-04-18 11:23:00.425674] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:52.425 11:23:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:52.425 11:23:00 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:52.425 11:23:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:52.425 11:23:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:52.425 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:31:52.425 ************************************ 00:31:52.425 START TEST fio_dif_1_default 00:31:52.425 ************************************ 00:31:52.425 11:23:00 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:31:52.425 11:23:00 -- target/dif.sh@86 -- # create_subsystems 0 00:31:52.425 11:23:00 -- target/dif.sh@28 -- # local sub 00:31:52.425 11:23:00 -- target/dif.sh@30 -- # for sub in "$@" 00:31:52.425 11:23:00 -- target/dif.sh@31 -- # create_subsystem 0 00:31:52.425 11:23:00 -- target/dif.sh@18 -- # local sub_id=0 00:31:52.425 11:23:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:52.425 11:23:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:52.425 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:31:52.425 bdev_null0 00:31:52.425 11:23:00 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:52.425 11:23:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:52.425 11:23:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:52.425 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:31:52.425 11:23:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:52.425 11:23:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:52.425 11:23:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:52.425 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:31:52.425 11:23:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:52.425 11:23:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:52.425 11:23:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:52.426 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:31:52.426 [2024-04-18 11:23:00.554007] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:52.426 11:23:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:52.426 11:23:00 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:52.426 11:23:00 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:52.426 11:23:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:52.426 11:23:00 -- nvmf/common.sh@521 -- # config=() 00:31:52.426 11:23:00 -- nvmf/common.sh@521 -- # local subsystem config 00:31:52.426 11:23:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:52.426 11:23:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:52.426 11:23:00 -- target/dif.sh@82 -- # gen_fio_conf 00:31:52.426 11:23:00 -- target/dif.sh@54 -- # local file 00:31:52.426 11:23:00 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:52.426 11:23:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:52.426 { 00:31:52.426 "params": { 00:31:52.426 "name": "Nvme$subsystem", 00:31:52.426 "trtype": "$TEST_TRANSPORT", 00:31:52.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.426 "adrfam": "ipv4", 00:31:52.426 "trsvcid": "$NVMF_PORT", 00:31:52.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.426 "hdgst": ${hdgst:-false}, 00:31:52.426 "ddgst": ${ddgst:-false} 00:31:52.426 }, 00:31:52.426 "method": "bdev_nvme_attach_controller" 00:31:52.426 } 00:31:52.426 EOF 00:31:52.426 )") 00:31:52.426 11:23:00 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:52.426 11:23:00 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:52.426 11:23:00 -- target/dif.sh@56 -- # cat 00:31:52.426 11:23:00 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:52.426 11:23:00 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:52.426 11:23:00 -- common/autotest_common.sh@1327 -- # shift 00:31:52.426 11:23:00 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:52.426 11:23:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.426 11:23:00 -- nvmf/common.sh@543 -- # cat 00:31:52.426 11:23:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:52.426 11:23:00 -- target/dif.sh@72 -- # (( file <= files )) 
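The /dev/fd plumbing above boils down to: gen_nvmf_target_json emits a bdev-subsystem JSON config that attaches an NVMe-oF/TCP controller for cnode0, and fio_bdev runs fio with SPDK's bdev ioengine preloaded against the resulting bdev. A standalone equivalent looks roughly like this (a sketch; the outer JSON wrapper and any job options not visible in the trace are assumptions):
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
# build/fio/spdk_bdev is SPDK's fio bdev plugin, the same library the job preloads above after libasan
LD_PRELOAD=./build/fio/spdk_bdev fio --name=filename0 --ioengine=spdk_bdev \
  --spdk_json_conf=/tmp/nvme0.json --filename=Nvme0n1 --thread=1 \
  --rw=randread --bs=4096 --iodepth=4 --time_based=1 --runtime=10
Here --filename names the bdev the attach call creates (Nvme0n1), and --thread=1 is required by the SPDK fio plugin.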
00:31:52.426 11:23:00 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:52.426 11:23:00 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:52.426 11:23:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:52.426 11:23:00 -- nvmf/common.sh@545 -- # jq . 00:31:52.426 11:23:00 -- nvmf/common.sh@546 -- # IFS=, 00:31:52.426 11:23:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:52.426 "params": { 00:31:52.426 "name": "Nvme0", 00:31:52.426 "trtype": "tcp", 00:31:52.426 "traddr": "10.0.0.2", 00:31:52.426 "adrfam": "ipv4", 00:31:52.426 "trsvcid": "4420", 00:31:52.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:52.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:52.426 "hdgst": false, 00:31:52.426 "ddgst": false 00:31:52.426 }, 00:31:52.426 "method": "bdev_nvme_attach_controller" 00:31:52.426 }' 00:31:52.426 11:23:00 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:52.426 11:23:00 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:52.426 11:23:00 -- common/autotest_common.sh@1333 -- # break 00:31:52.426 11:23:00 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:52.426 11:23:00 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:52.690 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:52.690 fio-3.35 00:31:52.690 Starting 1 thread 00:32:04.924 00:32:04.924 filename0: (groupid=0, jobs=1): err= 0: pid=92736: Thu Apr 18 11:23:11 2024 00:32:04.924 read: IOPS=165, BW=661KiB/s (676kB/s)(6608KiB/10003msec) 00:32:04.924 slat (usec): min=8, max=100, avg=15.72, stdev=12.46 00:32:04.924 clat (usec): min=589, max=41908, avg=24167.70, stdev=19932.68 00:32:04.924 lat (usec): min=599, max=41967, avg=24183.42, stdev=19932.04 00:32:04.924 clat percentiles (usec): 00:32:04.924 | 1.00th=[ 611], 5.00th=[ 627], 10.00th=[ 644], 20.00th=[ 668], 00:32:04.924 | 30.00th=[ 725], 40.00th=[ 840], 50.00th=[40633], 60.00th=[41157], 00:32:04.924 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:04.924 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:04.924 | 99.99th=[41681] 00:32:04.924 bw ( KiB/s): min= 480, max= 896, per=100.00%, avg=665.11, stdev=139.50, samples=19 00:32:04.924 iops : min= 120, max= 224, avg=166.26, stdev=34.89, samples=19 00:32:04.924 lat (usec) : 750=33.05%, 1000=8.60% 00:32:04.924 lat (msec) : 2=0.24%, 50=58.11% 00:32:04.924 cpu : usr=93.32%, sys=6.10%, ctx=60, majf=0, minf=1637 00:32:04.924 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.924 issued rwts: total=1652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.924 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:04.924 00:32:04.924 Run status group 0 (all jobs): 00:32:04.924 READ: bw=661KiB/s (676kB/s), 661KiB/s-661KiB/s (676kB/s-676kB/s), io=6608KiB (6767kB), run=10003-10003msec 00:32:04.924 ----------------------------------------------------- 00:32:04.924 Suppressions used: 00:32:04.924 count bytes template 00:32:04.924 1 8 /usr/src/fio/parse.c 00:32:04.924 1 8 libtcmalloc_minimal.so 00:32:04.924 1 904 libcrypto.so 00:32:04.924 
----------------------------------------------------- 00:32:04.924 00:32:04.924 11:23:12 -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:04.924 11:23:12 -- target/dif.sh@43 -- # local sub 00:32:04.924 11:23:12 -- target/dif.sh@45 -- # for sub in "$@" 00:32:04.924 11:23:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:04.924 11:23:12 -- target/dif.sh@36 -- # local sub_id=0 00:32:04.924 11:23:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:04.924 11:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.924 11:23:12 -- common/autotest_common.sh@10 -- # set +x 00:32:04.924 11:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.924 11:23:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:04.924 11:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.924 11:23:12 -- common/autotest_common.sh@10 -- # set +x 00:32:04.924 ************************************ 00:32:04.924 END TEST fio_dif_1_default 00:32:04.924 ************************************ 00:32:04.924 11:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.924 00:32:04.924 real 0m12.394s 00:32:04.924 user 0m11.213s 00:32:04.924 sys 0m1.028s 00:32:04.924 11:23:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:04.924 11:23:12 -- common/autotest_common.sh@10 -- # set +x 00:32:04.924 11:23:12 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:04.924 11:23:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:04.924 11:23:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:04.924 11:23:12 -- common/autotest_common.sh@10 -- # set +x 00:32:04.924 ************************************ 00:32:04.924 START TEST fio_dif_1_multi_subsystems 00:32:04.924 ************************************ 00:32:04.924 11:23:13 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:32:04.924 11:23:13 -- target/dif.sh@92 -- # local files=1 00:32:04.924 11:23:13 -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:04.924 11:23:13 -- target/dif.sh@28 -- # local sub 00:32:04.924 11:23:13 -- target/dif.sh@30 -- # for sub in "$@" 00:32:04.924 11:23:13 -- target/dif.sh@31 -- # create_subsystem 0 00:32:04.924 11:23:13 -- target/dif.sh@18 -- # local sub_id=0 00:32:04.924 11:23:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:04.924 11:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.924 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:32:04.924 bdev_null0 00:32:04.925 11:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.925 11:23:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:04.925 11:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.925 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:32:04.925 11:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.925 11:23:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:04.925 11:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.925 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:32:04.925 11:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.925 11:23:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:04.925 11:23:13 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.925 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:32:04.925 [2024-04-18 11:23:13.059270] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.925 11:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.925 11:23:13 -- target/dif.sh@30 -- # for sub in "$@" 00:32:04.925 11:23:13 -- target/dif.sh@31 -- # create_subsystem 1 00:32:04.925 11:23:13 -- target/dif.sh@18 -- # local sub_id=1 00:32:04.925 11:23:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:04.925 11:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.925 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:32:04.925 bdev_null1 00:32:04.925 11:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.925 11:23:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:04.925 11:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.925 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:32:04.925 11:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.925 11:23:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:04.925 11:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.925 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:32:04.925 11:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.925 11:23:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:04.925 11:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.925 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:32:04.925 11:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.925 11:23:13 -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:04.925 11:23:13 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:04.925 11:23:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:04.925 11:23:13 -- nvmf/common.sh@521 -- # config=() 00:32:04.925 11:23:13 -- nvmf/common.sh@521 -- # local subsystem config 00:32:04.925 11:23:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:04.925 11:23:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:04.925 11:23:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:04.925 { 00:32:04.925 "params": { 00:32:04.925 "name": "Nvme$subsystem", 00:32:04.925 "trtype": "$TEST_TRANSPORT", 00:32:04.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:04.925 "adrfam": "ipv4", 00:32:04.925 "trsvcid": "$NVMF_PORT", 00:32:04.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:04.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:04.925 "hdgst": ${hdgst:-false}, 00:32:04.925 "ddgst": ${ddgst:-false} 00:32:04.925 }, 00:32:04.925 "method": "bdev_nvme_attach_controller" 00:32:04.925 } 00:32:04.925 EOF 00:32:04.925 )") 00:32:04.925 11:23:13 -- target/dif.sh@82 -- # gen_fio_conf 00:32:04.925 11:23:13 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:04.925 11:23:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:04.925 11:23:13 -- target/dif.sh@54 -- # local file 00:32:04.925 11:23:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:32:04.925 11:23:13 -- nvmf/common.sh@543 -- # cat 00:32:04.925 11:23:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:04.925 11:23:13 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:04.925 11:23:13 -- common/autotest_common.sh@1327 -- # shift 00:32:04.925 11:23:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:04.925 11:23:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.925 11:23:13 -- target/dif.sh@56 -- # cat 00:32:04.925 11:23:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:04.925 11:23:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:04.925 { 00:32:04.925 "params": { 00:32:04.925 "name": "Nvme$subsystem", 00:32:04.925 "trtype": "$TEST_TRANSPORT", 00:32:04.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:04.925 "adrfam": "ipv4", 00:32:04.925 "trsvcid": "$NVMF_PORT", 00:32:04.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:04.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:04.925 "hdgst": ${hdgst:-false}, 00:32:04.925 "ddgst": ${ddgst:-false} 00:32:04.925 }, 00:32:04.925 "method": "bdev_nvme_attach_controller" 00:32:04.925 } 00:32:04.925 EOF 00:32:04.925 )") 00:32:04.925 11:23:13 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:04.925 11:23:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:04.925 11:23:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:04.925 11:23:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:04.925 11:23:13 -- target/dif.sh@72 -- # (( file <= files )) 00:32:04.925 11:23:13 -- nvmf/common.sh@543 -- # cat 00:32:04.925 11:23:13 -- target/dif.sh@73 -- # cat 00:32:04.925 11:23:13 -- nvmf/common.sh@545 -- # jq . 
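The gen_nvmf_target_json trace above shows how the fio-side configuration is built: one JSON fragment per subsystem is produced with a here-doc, appended to a bash array, and the fragments are then comma-joined (IFS=,) into the bdev_nvme_attach_controller list printed just below and handed to fio through --spdk_json_conf on /dev/fd/62. The accumulation-and-join idiom, reduced to a self-contained sketch with placeholder address values:

  #!/usr/bin/env bash
  # Minimal sketch of the heredoc-accumulate-and-join pattern used by
  # gen_nvmf_target_json; the addresses and NQNs below are placeholders.
  config=()
  for subsystem in 0 1; do
    fragment=$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
EOF
    )
    config+=("$fragment")
  done
  # Join the fragments with commas, as the harness does via IFS=, before
  # printing the controller list.
  ( IFS=, ; printf '%s\n' "${config[*]}" )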
00:32:04.925 11:23:13 -- target/dif.sh@72 -- # (( file++ )) 00:32:04.925 11:23:13 -- target/dif.sh@72 -- # (( file <= files )) 00:32:04.925 11:23:13 -- nvmf/common.sh@546 -- # IFS=, 00:32:04.925 11:23:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:04.925 "params": { 00:32:04.925 "name": "Nvme0", 00:32:04.925 "trtype": "tcp", 00:32:04.925 "traddr": "10.0.0.2", 00:32:04.925 "adrfam": "ipv4", 00:32:04.925 "trsvcid": "4420", 00:32:04.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.925 "hdgst": false, 00:32:04.925 "ddgst": false 00:32:04.925 }, 00:32:04.925 "method": "bdev_nvme_attach_controller" 00:32:04.925 },{ 00:32:04.925 "params": { 00:32:04.925 "name": "Nvme1", 00:32:04.925 "trtype": "tcp", 00:32:04.925 "traddr": "10.0.0.2", 00:32:04.925 "adrfam": "ipv4", 00:32:04.925 "trsvcid": "4420", 00:32:04.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:04.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:04.925 "hdgst": false, 00:32:04.925 "ddgst": false 00:32:04.925 }, 00:32:04.925 "method": "bdev_nvme_attach_controller" 00:32:04.925 }' 00:32:04.925 11:23:13 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:04.925 11:23:13 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:04.925 11:23:13 -- common/autotest_common.sh@1333 -- # break 00:32:04.925 11:23:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:04.925 11:23:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.184 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:05.184 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:05.184 fio-3.35 00:32:05.184 Starting 2 threads 00:32:17.382 00:32:17.382 filename0: (groupid=0, jobs=1): err= 0: pid=92911: Thu Apr 18 11:23:24 2024 00:32:17.382 read: IOPS=119, BW=479KiB/s (491kB/s)(4800KiB/10016msec) 00:32:17.382 slat (nsec): min=8127, max=74168, avg=15857.31, stdev=10911.01 00:32:17.382 clat (usec): min=552, max=42013, avg=33331.86, stdev=15949.71 00:32:17.382 lat (usec): min=561, max=42059, avg=33347.71, stdev=15949.15 00:32:17.382 clat percentiles (usec): 00:32:17.382 | 1.00th=[ 586], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[40633], 00:32:17.382 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:17.382 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:32:17.382 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:17.382 | 99.99th=[42206] 00:32:17.382 bw ( KiB/s): min= 384, max= 833, per=48.70%, avg=478.45, stdev=93.34, samples=20 00:32:17.382 iops : min= 96, max= 208, avg=119.60, stdev=23.28, samples=20 00:32:17.382 lat (usec) : 750=14.17%, 1000=3.50% 00:32:17.382 lat (msec) : 2=1.33%, 10=0.33%, 50=80.67% 00:32:17.382 cpu : usr=95.40%, sys=4.02%, ctx=75, majf=0, minf=1637 00:32:17.382 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.382 issued rwts: total=1200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.382 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:17.382 filename1: (groupid=0, jobs=1): err= 0: 
pid=92912: Thu Apr 18 11:23:24 2024 00:32:17.382 read: IOPS=125, BW=503KiB/s (515kB/s)(5040KiB/10026msec) 00:32:17.382 slat (usec): min=8, max=119, avg=17.46, stdev=13.36 00:32:17.382 clat (usec): min=549, max=42966, avg=31768.73, stdev=17013.83 00:32:17.382 lat (usec): min=558, max=43019, avg=31786.19, stdev=17013.59 00:32:17.382 clat percentiles (usec): 00:32:17.382 | 1.00th=[ 594], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 873], 00:32:17.382 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:17.383 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:17.383 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:17.383 | 99.99th=[42730] 00:32:17.383 bw ( KiB/s): min= 384, max= 800, per=51.15%, avg=502.40, stdev=89.37, samples=20 00:32:17.383 iops : min= 96, max= 200, avg=125.60, stdev=22.34, samples=20 00:32:17.383 lat (usec) : 750=15.00%, 1000=5.71% 00:32:17.383 lat (msec) : 2=2.14%, 10=0.32%, 50=76.83% 00:32:17.383 cpu : usr=95.15%, sys=3.94%, ctx=113, majf=0, minf=1637 00:32:17.383 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.383 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.383 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:17.383 00:32:17.383 Run status group 0 (all jobs): 00:32:17.383 READ: bw=981KiB/s (1005kB/s), 479KiB/s-503KiB/s (491kB/s-515kB/s), io=9840KiB (10.1MB), run=10016-10026msec 00:32:17.642 ----------------------------------------------------- 00:32:17.642 Suppressions used: 00:32:17.642 count bytes template 00:32:17.642 2 16 /usr/src/fio/parse.c 00:32:17.642 1 8 libtcmalloc_minimal.so 00:32:17.642 1 904 libcrypto.so 00:32:17.642 ----------------------------------------------------- 00:32:17.642 00:32:17.642 11:23:25 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:17.642 11:23:25 -- target/dif.sh@43 -- # local sub 00:32:17.642 11:23:25 -- target/dif.sh@45 -- # for sub in "$@" 00:32:17.642 11:23:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:17.642 11:23:25 -- target/dif.sh@36 -- # local sub_id=0 00:32:17.642 11:23:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:17.642 11:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.642 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.642 11:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.642 11:23:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:17.642 11:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.642 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.642 11:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.642 11:23:25 -- target/dif.sh@45 -- # for sub in "$@" 00:32:17.642 11:23:25 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:17.642 11:23:25 -- target/dif.sh@36 -- # local sub_id=1 00:32:17.642 11:23:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.642 11:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.642 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.642 11:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.642 11:23:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:17.642 11:23:25 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:32:17.642 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.642 ************************************ 00:32:17.642 END TEST fio_dif_1_multi_subsystems 00:32:17.642 ************************************ 00:32:17.642 11:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.642 00:32:17.642 real 0m12.688s 00:32:17.642 user 0m21.275s 00:32:17.642 sys 0m1.227s 00:32:17.642 11:23:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:17.642 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.642 11:23:25 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:17.642 11:23:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:17.642 11:23:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:17.642 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.642 ************************************ 00:32:17.642 START TEST fio_dif_rand_params 00:32:17.642 ************************************ 00:32:17.642 11:23:25 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:32:17.642 11:23:25 -- target/dif.sh@100 -- # local NULL_DIF 00:32:17.642 11:23:25 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:17.642 11:23:25 -- target/dif.sh@103 -- # NULL_DIF=3 00:32:17.642 11:23:25 -- target/dif.sh@103 -- # bs=128k 00:32:17.642 11:23:25 -- target/dif.sh@103 -- # numjobs=3 00:32:17.642 11:23:25 -- target/dif.sh@103 -- # iodepth=3 00:32:17.642 11:23:25 -- target/dif.sh@103 -- # runtime=5 00:32:17.642 11:23:25 -- target/dif.sh@105 -- # create_subsystems 0 00:32:17.642 11:23:25 -- target/dif.sh@28 -- # local sub 00:32:17.642 11:23:25 -- target/dif.sh@30 -- # for sub in "$@" 00:32:17.642 11:23:25 -- target/dif.sh@31 -- # create_subsystem 0 00:32:17.642 11:23:25 -- target/dif.sh@18 -- # local sub_id=0 00:32:17.642 11:23:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:17.642 11:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.642 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.642 bdev_null0 00:32:17.642 11:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.642 11:23:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:17.642 11:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.642 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.642 11:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.642 11:23:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:17.642 11:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.642 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.642 11:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.642 11:23:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:17.642 11:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.642 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:32:17.901 [2024-04-18 11:23:25.864747] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.901 11:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.902 11:23:25 -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:17.902 11:23:25 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:17.902 11:23:25 -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:32:17.902 11:23:25 -- nvmf/common.sh@521 -- # config=() 00:32:17.902 11:23:25 -- nvmf/common.sh@521 -- # local subsystem config 00:32:17.902 11:23:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.902 11:23:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:17.902 11:23:25 -- target/dif.sh@82 -- # gen_fio_conf 00:32:17.902 11:23:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:17.902 { 00:32:17.902 "params": { 00:32:17.902 "name": "Nvme$subsystem", 00:32:17.902 "trtype": "$TEST_TRANSPORT", 00:32:17.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.902 "adrfam": "ipv4", 00:32:17.902 "trsvcid": "$NVMF_PORT", 00:32:17.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.902 "hdgst": ${hdgst:-false}, 00:32:17.902 "ddgst": ${ddgst:-false} 00:32:17.902 }, 00:32:17.902 "method": "bdev_nvme_attach_controller" 00:32:17.902 } 00:32:17.902 EOF 00:32:17.902 )") 00:32:17.902 11:23:25 -- target/dif.sh@54 -- # local file 00:32:17.902 11:23:25 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.902 11:23:25 -- target/dif.sh@56 -- # cat 00:32:17.902 11:23:25 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:17.902 11:23:25 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:17.902 11:23:25 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:17.902 11:23:25 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:17.902 11:23:25 -- common/autotest_common.sh@1327 -- # shift 00:32:17.902 11:23:25 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:17.902 11:23:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.902 11:23:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:17.902 11:23:25 -- target/dif.sh@72 -- # (( file <= files )) 00:32:17.902 11:23:25 -- nvmf/common.sh@543 -- # cat 00:32:17.902 11:23:25 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:17.902 11:23:25 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:17.902 11:23:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:17.902 11:23:25 -- nvmf/common.sh@545 -- # jq . 
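The ldd | grep libasan | awk '{print $3}' pipeline that closes the trace above is the harness resolving the ASan runtime that the fio plugin was linked against; the result is placed at the front of LD_PRELOAD (seen a few lines below as '/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev') so that an uninstrumented fio binary can load the instrumented ioengine. Condensed into a standalone sketch, with the JSON config and job file given as ordinary files instead of the /dev/fd process substitutions the harness uses (bdev.json and dif.fio are placeholder names):

  #!/usr/bin/env bash
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # Resolve the ASan runtime the plugin links against (empty if the build
  # is not sanitized).
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # Preload the sanitizer runtime first, then the SPDK bdev ioengine, and
  # run fio against the generated bdev JSON config and job file.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.fio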
00:32:17.902 11:23:25 -- nvmf/common.sh@546 -- # IFS=, 00:32:17.902 11:23:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:17.902 "params": { 00:32:17.902 "name": "Nvme0", 00:32:17.902 "trtype": "tcp", 00:32:17.902 "traddr": "10.0.0.2", 00:32:17.902 "adrfam": "ipv4", 00:32:17.902 "trsvcid": "4420", 00:32:17.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:17.902 "hdgst": false, 00:32:17.902 "ddgst": false 00:32:17.902 }, 00:32:17.902 "method": "bdev_nvme_attach_controller" 00:32:17.902 }' 00:32:17.902 11:23:25 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:17.902 11:23:25 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:17.902 11:23:25 -- common/autotest_common.sh@1333 -- # break 00:32:17.902 11:23:25 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:17.902 11:23:25 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:18.160 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:18.160 ... 00:32:18.160 fio-3.35 00:32:18.160 Starting 3 threads 00:32:24.759 00:32:24.759 filename0: (groupid=0, jobs=1): err= 0: pid=93077: Thu Apr 18 11:23:32 2024 00:32:24.759 read: IOPS=197, BW=24.6MiB/s (25.8MB/s)(123MiB/5005msec) 00:32:24.759 slat (nsec): min=6107, max=51438, avg=18825.75, stdev=7142.05 00:32:24.759 clat (usec): min=6983, max=58368, avg=15181.56, stdev=5882.15 00:32:24.759 lat (usec): min=7001, max=58391, avg=15200.39, stdev=5882.05 00:32:24.759 clat percentiles (usec): 00:32:24.759 | 1.00th=[ 7767], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[13435], 00:32:24.759 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:32:24.759 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16909], 95.00th=[17433], 00:32:24.759 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58459], 99.95th=[58459], 00:32:24.759 | 99.99th=[58459] 00:32:24.759 bw ( KiB/s): min=21547, max=30976, per=32.66%, avg=25194.70, stdev=2803.80, samples=10 00:32:24.759 iops : min= 168, max= 242, avg=196.80, stdev=21.95, samples=10 00:32:24.759 lat (msec) : 10=10.13%, 20=88.04%, 100=1.82% 00:32:24.759 cpu : usr=92.65%, sys=5.86%, ctx=46, majf=0, minf=1637 00:32:24.759 IO depths : 1=4.0%, 2=96.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.759 issued rwts: total=987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.759 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:24.759 filename0: (groupid=0, jobs=1): err= 0: pid=93078: Thu Apr 18 11:23:32 2024 00:32:24.759 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(121MiB/5006msec) 00:32:24.759 slat (nsec): min=6122, max=49233, avg=14956.58, stdev=6948.72 00:32:24.759 clat (usec): min=5024, max=19768, avg=15459.38, stdev=3449.95 00:32:24.759 lat (usec): min=5034, max=19789, avg=15474.33, stdev=3449.67 00:32:24.759 clat percentiles (usec): 00:32:24.759 | 1.00th=[ 5145], 5.00th=[ 6849], 10.00th=[10421], 20.00th=[12518], 00:32:24.759 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16712], 60.00th=[16909], 00:32:24.759 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18220], 95.00th=[18744], 00:32:24.759 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19792], 99.95th=[19792], 
00:32:24.759 | 99.99th=[19792] 00:32:24.759 bw ( KiB/s): min=22272, max=27648, per=32.06%, avg=24729.60, stdev=1766.21, samples=10 00:32:24.759 iops : min= 174, max= 216, avg=193.20, stdev=13.80, samples=10 00:32:24.759 lat (msec) : 10=7.74%, 20=92.26% 00:32:24.759 cpu : usr=92.51%, sys=6.03%, ctx=7, majf=0, minf=1635 00:32:24.759 IO depths : 1=33.2%, 2=66.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.759 issued rwts: total=969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.759 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:24.759 filename0: (groupid=0, jobs=1): err= 0: pid=93079: Thu Apr 18 11:23:32 2024 00:32:24.759 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(133MiB/5008msec) 00:32:24.759 slat (nsec): min=5845, max=76872, avg=17912.82, stdev=6470.45 00:32:24.759 clat (usec): min=6918, max=56218, avg=14115.05, stdev=7663.48 00:32:24.759 lat (usec): min=6935, max=56251, avg=14132.96, stdev=7663.61 00:32:24.759 clat percentiles (usec): 00:32:24.759 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[11076], 20.00th=[11863], 00:32:24.759 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13173], 00:32:24.759 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14091], 95.00th=[14746], 00:32:24.759 | 99.00th=[54264], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:32:24.759 | 99.99th=[56361] 00:32:24.759 bw ( KiB/s): min=22272, max=30976, per=35.15%, avg=27110.40, stdev=2378.49, samples=10 00:32:24.759 iops : min= 174, max= 242, avg=211.80, stdev=18.58, samples=10 00:32:24.759 lat (msec) : 10=4.14%, 20=92.18%, 50=0.38%, 100=3.30% 00:32:24.759 cpu : usr=92.81%, sys=5.67%, ctx=10, majf=0, minf=1637 00:32:24.759 IO depths : 1=4.6%, 2=95.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.759 issued rwts: total=1062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.759 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:24.759 00:32:24.759 Run status group 0 (all jobs): 00:32:24.759 READ: bw=75.3MiB/s (79.0MB/s), 24.2MiB/s-26.5MiB/s (25.4MB/s-27.8MB/s), io=377MiB (396MB), run=5005-5008msec 00:32:25.037 ----------------------------------------------------- 00:32:25.037 Suppressions used: 00:32:25.037 count bytes template 00:32:25.037 5 44 /usr/src/fio/parse.c 00:32:25.037 1 8 libtcmalloc_minimal.so 00:32:25.037 1 904 libcrypto.so 00:32:25.037 ----------------------------------------------------- 00:32:25.037 00:32:25.297 11:23:33 -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:25.297 11:23:33 -- target/dif.sh@43 -- # local sub 00:32:25.297 11:23:33 -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.297 11:23:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:25.297 11:23:33 -- target/dif.sh@36 -- # local sub_id=0 00:32:25.297 11:23:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:25.297 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.297 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.297 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.297 11:23:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:25.297 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.297 11:23:33 -- 
common/autotest_common.sh@10 -- # set +x 00:32:25.297 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.297 11:23:33 -- target/dif.sh@109 -- # NULL_DIF=2 00:32:25.297 11:23:33 -- target/dif.sh@109 -- # bs=4k 00:32:25.297 11:23:33 -- target/dif.sh@109 -- # numjobs=8 00:32:25.297 11:23:33 -- target/dif.sh@109 -- # iodepth=16 00:32:25.297 11:23:33 -- target/dif.sh@109 -- # runtime= 00:32:25.297 11:23:33 -- target/dif.sh@109 -- # files=2 00:32:25.297 11:23:33 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:25.297 11:23:33 -- target/dif.sh@28 -- # local sub 00:32:25.297 11:23:33 -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.297 11:23:33 -- target/dif.sh@31 -- # create_subsystem 0 00:32:25.297 11:23:33 -- target/dif.sh@18 -- # local sub_id=0 00:32:25.297 11:23:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:25.297 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.297 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 bdev_null0 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 [2024-04-18 11:23:33.308267] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.298 11:23:33 -- target/dif.sh@31 -- # create_subsystem 1 00:32:25.298 11:23:33 -- target/dif.sh@18 -- # local sub_id=1 00:32:25.298 11:23:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 bdev_null1 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:32:25.298 11:23:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.298 11:23:33 -- target/dif.sh@31 -- # create_subsystem 2 00:32:25.298 11:23:33 -- target/dif.sh@18 -- # local sub_id=2 00:32:25.298 11:23:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 bdev_null2 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:25.298 11:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.298 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:32:25.298 11:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.298 11:23:33 -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:25.298 11:23:33 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:25.298 11:23:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:25.298 11:23:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.298 11:23:33 -- nvmf/common.sh@521 -- # config=() 00:32:25.298 11:23:33 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.298 11:23:33 -- target/dif.sh@82 -- # gen_fio_conf 00:32:25.298 11:23:33 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:25.298 11:23:33 -- nvmf/common.sh@521 -- # local subsystem config 00:32:25.298 11:23:33 -- target/dif.sh@54 -- # local file 00:32:25.298 11:23:33 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:25.298 11:23:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:25.298 11:23:33 -- target/dif.sh@56 -- # cat 00:32:25.298 11:23:33 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:25.298 11:23:33 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:25.298 11:23:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:25.298 { 00:32:25.298 "params": { 00:32:25.298 "name": "Nvme$subsystem", 00:32:25.298 "trtype": "$TEST_TRANSPORT", 00:32:25.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.298 "adrfam": "ipv4", 00:32:25.298 "trsvcid": "$NVMF_PORT", 00:32:25.298 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.298 "hdgst": ${hdgst:-false}, 00:32:25.298 "ddgst": ${ddgst:-false} 00:32:25.298 }, 00:32:25.298 "method": "bdev_nvme_attach_controller" 00:32:25.298 } 00:32:25.298 EOF 00:32:25.298 )") 00:32:25.298 11:23:33 -- common/autotest_common.sh@1327 -- # shift 00:32:25.298 11:23:33 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:25.298 11:23:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.298 11:23:33 -- nvmf/common.sh@543 -- # cat 00:32:25.298 11:23:33 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:25.298 11:23:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:25.298 11:23:33 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:25.298 11:23:33 -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.298 11:23:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:25.298 11:23:33 -- target/dif.sh@73 -- # cat 00:32:25.298 11:23:33 -- target/dif.sh@72 -- # (( file++ )) 00:32:25.298 11:23:33 -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.298 11:23:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:25.298 11:23:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:25.298 { 00:32:25.298 "params": { 00:32:25.298 "name": "Nvme$subsystem", 00:32:25.298 "trtype": "$TEST_TRANSPORT", 00:32:25.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.298 "adrfam": "ipv4", 00:32:25.298 "trsvcid": "$NVMF_PORT", 00:32:25.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.298 "hdgst": ${hdgst:-false}, 00:32:25.298 "ddgst": ${ddgst:-false} 00:32:25.298 }, 00:32:25.298 "method": "bdev_nvme_attach_controller" 00:32:25.298 } 00:32:25.298 EOF 00:32:25.298 )") 00:32:25.298 11:23:33 -- target/dif.sh@73 -- # cat 00:32:25.298 11:23:33 -- nvmf/common.sh@543 -- # cat 00:32:25.298 11:23:33 -- target/dif.sh@72 -- # (( file++ )) 00:32:25.298 11:23:33 -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.298 11:23:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:25.298 11:23:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:25.298 { 00:32:25.298 "params": { 00:32:25.298 "name": "Nvme$subsystem", 00:32:25.298 "trtype": "$TEST_TRANSPORT", 00:32:25.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.298 "adrfam": "ipv4", 00:32:25.298 "trsvcid": "$NVMF_PORT", 00:32:25.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.298 "hdgst": ${hdgst:-false}, 00:32:25.298 "ddgst": ${ddgst:-false} 00:32:25.298 }, 00:32:25.298 "method": "bdev_nvme_attach_controller" 00:32:25.298 } 00:32:25.298 EOF 00:32:25.298 )") 00:32:25.298 11:23:33 -- nvmf/common.sh@543 -- # cat 00:32:25.298 11:23:33 -- nvmf/common.sh@545 -- # jq . 
00:32:25.298 11:23:33 -- nvmf/common.sh@546 -- # IFS=, 00:32:25.298 11:23:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:25.298 "params": { 00:32:25.298 "name": "Nvme0", 00:32:25.298 "trtype": "tcp", 00:32:25.298 "traddr": "10.0.0.2", 00:32:25.298 "adrfam": "ipv4", 00:32:25.298 "trsvcid": "4420", 00:32:25.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.298 "hdgst": false, 00:32:25.298 "ddgst": false 00:32:25.298 }, 00:32:25.298 "method": "bdev_nvme_attach_controller" 00:32:25.298 },{ 00:32:25.298 "params": { 00:32:25.298 "name": "Nvme1", 00:32:25.298 "trtype": "tcp", 00:32:25.298 "traddr": "10.0.0.2", 00:32:25.298 "adrfam": "ipv4", 00:32:25.298 "trsvcid": "4420", 00:32:25.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:25.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:25.298 "hdgst": false, 00:32:25.298 "ddgst": false 00:32:25.298 }, 00:32:25.298 "method": "bdev_nvme_attach_controller" 00:32:25.298 },{ 00:32:25.298 "params": { 00:32:25.298 "name": "Nvme2", 00:32:25.298 "trtype": "tcp", 00:32:25.298 "traddr": "10.0.0.2", 00:32:25.298 "adrfam": "ipv4", 00:32:25.298 "trsvcid": "4420", 00:32:25.298 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:25.298 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:25.298 "hdgst": false, 00:32:25.298 "ddgst": false 00:32:25.298 }, 00:32:25.298 "method": "bdev_nvme_attach_controller" 00:32:25.298 }' 00:32:25.298 11:23:33 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:25.299 11:23:33 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:25.299 11:23:33 -- common/autotest_common.sh@1333 -- # break 00:32:25.299 11:23:33 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:25.299 11:23:33 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.557 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:25.557 ... 00:32:25.557 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:25.557 ... 00:32:25.557 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:25.557 ... 
00:32:25.557 fio-3.35 00:32:25.557 Starting 24 threads 00:32:37.756 00:32:37.756 filename0: (groupid=0, jobs=1): err= 0: pid=93184: Thu Apr 18 11:23:44 2024 00:32:37.756 read: IOPS=191, BW=766KiB/s (785kB/s)(7712KiB/10065msec) 00:32:37.756 slat (usec): min=6, max=11039, avg=26.00, stdev=294.07 00:32:37.756 clat (msec): min=8, max=184, avg=83.26, stdev=26.59 00:32:37.756 lat (msec): min=8, max=184, avg=83.29, stdev=26.58 00:32:37.756 clat percentiles (msec): 00:32:37.756 | 1.00th=[ 13], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 64], 00:32:37.756 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 88], 00:32:37.756 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 115], 95.00th=[ 136], 00:32:37.756 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 184], 99.95th=[ 184], 00:32:37.756 | 99.99th=[ 184] 00:32:37.756 bw ( KiB/s): min= 552, max= 1024, per=4.84%, avg=764.60, stdev=109.10, samples=20 00:32:37.756 iops : min= 138, max= 256, avg=191.15, stdev=27.27, samples=20 00:32:37.756 lat (msec) : 10=0.83%, 20=0.83%, 50=3.84%, 100=71.68%, 250=22.82% 00:32:37.756 cpu : usr=42.09%, sys=1.20%, ctx=1344, majf=0, minf=1637 00:32:37.756 IO depths : 1=1.8%, 2=3.8%, 4=11.5%, 8=71.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:32:37.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.756 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.756 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.756 filename0: (groupid=0, jobs=1): err= 0: pid=93185: Thu Apr 18 11:23:44 2024 00:32:37.756 read: IOPS=152, BW=611KiB/s (626kB/s)(6120KiB/10013msec) 00:32:37.756 slat (usec): min=4, max=8035, avg=45.07, stdev=501.42 00:32:37.756 clat (msec): min=47, max=210, avg=104.43, stdev=26.55 00:32:37.756 lat (msec): min=47, max=210, avg=104.48, stdev=26.54 00:32:37.756 clat percentiles (msec): 00:32:37.756 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 72], 20.00th=[ 85], 00:32:37.756 | 30.00th=[ 96], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 108], 00:32:37.756 | 70.00th=[ 118], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 150], 00:32:37.756 | 99.00th=[ 194], 99.50th=[ 194], 99.90th=[ 211], 99.95th=[ 211], 00:32:37.756 | 99.99th=[ 211] 00:32:37.756 bw ( KiB/s): min= 384, max= 816, per=3.86%, avg=609.63, stdev=99.50, samples=19 00:32:37.756 iops : min= 96, max= 204, avg=152.32, stdev=24.85, samples=19 00:32:37.756 lat (msec) : 50=1.70%, 100=49.35%, 250=48.95% 00:32:37.756 cpu : usr=33.58%, sys=0.88%, ctx=937, majf=0, minf=1634 00:32:37.756 IO depths : 1=1.9%, 2=4.0%, 4=11.5%, 8=71.3%, 16=11.3%, 32=0.0%, >=64=0.0% 00:32:37.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.756 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.756 issued rwts: total=1530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.756 filename0: (groupid=0, jobs=1): err= 0: pid=93186: Thu Apr 18 11:23:44 2024 00:32:37.756 read: IOPS=160, BW=641KiB/s (656kB/s)(6424KiB/10029msec) 00:32:37.756 slat (usec): min=7, max=8033, avg=18.64, stdev=200.18 00:32:37.756 clat (msec): min=37, max=239, avg=99.66, stdev=27.82 00:32:37.756 lat (msec): min=37, max=239, avg=99.68, stdev=27.82 00:32:37.756 clat percentiles (msec): 00:32:37.756 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 78], 00:32:37.756 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 101], 00:32:37.756 | 70.00th=[ 108], 80.00th=[ 121], 
90.00th=[ 131], 95.00th=[ 144], 00:32:37.756 | 99.00th=[ 190], 99.50th=[ 201], 99.90th=[ 241], 99.95th=[ 241], 00:32:37.756 | 99.99th=[ 241] 00:32:37.756 bw ( KiB/s): min= 512, max= 824, per=4.05%, avg=640.05, stdev=85.26, samples=20 00:32:37.756 iops : min= 128, max= 206, avg=160.00, stdev=21.31, samples=20 00:32:37.756 lat (msec) : 50=1.62%, 100=58.28%, 250=40.10% 00:32:37.756 cpu : usr=38.77%, sys=1.13%, ctx=1133, majf=0, minf=1636 00:32:37.756 IO depths : 1=2.4%, 2=5.0%, 4=14.6%, 8=67.6%, 16=10.4%, 32=0.0%, >=64=0.0% 00:32:37.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.756 complete : 0=0.0%, 4=90.9%, 8=3.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.756 issued rwts: total=1606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.756 filename0: (groupid=0, jobs=1): err= 0: pid=93187: Thu Apr 18 11:23:44 2024 00:32:37.756 read: IOPS=145, BW=581KiB/s (595kB/s)(5824KiB/10026msec) 00:32:37.756 slat (nsec): min=5684, max=50254, avg=14461.25, stdev=4971.48 00:32:37.756 clat (msec): min=46, max=232, avg=110.06, stdev=28.95 00:32:37.756 lat (msec): min=46, max=232, avg=110.07, stdev=28.95 00:32:37.756 clat percentiles (msec): 00:32:37.756 | 1.00th=[ 56], 5.00th=[ 72], 10.00th=[ 83], 20.00th=[ 95], 00:32:37.756 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 108], 00:32:37.756 | 70.00th=[ 117], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 165], 00:32:37.756 | 99.00th=[ 213], 99.50th=[ 213], 99.90th=[ 232], 99.95th=[ 232], 00:32:37.756 | 99.99th=[ 232] 00:32:37.756 bw ( KiB/s): min= 384, max= 768, per=3.62%, avg=572.68, stdev=96.53, samples=19 00:32:37.756 iops : min= 96, max= 192, avg=143.16, stdev=24.14, samples=19 00:32:37.756 lat (msec) : 50=0.34%, 100=43.61%, 250=56.04% 00:32:37.756 cpu : usr=32.47%, sys=1.05%, ctx=916, majf=0, minf=1636 00:32:37.756 IO depths : 1=3.4%, 2=7.6%, 4=19.1%, 8=60.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:32:37.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.756 complete : 0=0.0%, 4=92.4%, 8=1.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.756 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.756 filename0: (groupid=0, jobs=1): err= 0: pid=93188: Thu Apr 18 11:23:44 2024 00:32:37.756 read: IOPS=147, BW=592KiB/s (606kB/s)(5936KiB/10028msec) 00:32:37.756 slat (usec): min=8, max=4033, avg=19.49, stdev=147.51 00:32:37.756 clat (msec): min=47, max=184, avg=107.93, stdev=27.14 00:32:37.756 lat (msec): min=47, max=184, avg=107.95, stdev=27.14 00:32:37.756 clat percentiles (msec): 00:32:37.756 | 1.00th=[ 56], 5.00th=[ 65], 10.00th=[ 74], 20.00th=[ 85], 00:32:37.756 | 30.00th=[ 95], 40.00th=[ 100], 50.00th=[ 106], 60.00th=[ 112], 00:32:37.756 | 70.00th=[ 121], 80.00th=[ 129], 90.00th=[ 142], 95.00th=[ 153], 00:32:37.756 | 99.00th=[ 184], 99.50th=[ 184], 99.90th=[ 184], 99.95th=[ 184], 00:32:37.756 | 99.99th=[ 184] 00:32:37.756 bw ( KiB/s): min= 384, max= 728, per=3.74%, avg=591.05, stdev=88.90, samples=19 00:32:37.756 iops : min= 96, max= 182, avg=147.74, stdev=22.21, samples=19 00:32:37.756 lat (msec) : 50=0.47%, 100=41.64%, 250=57.88% 00:32:37.756 cpu : usr=41.55%, sys=1.04%, ctx=1240, majf=0, minf=1636 00:32:37.756 IO depths : 1=4.1%, 2=8.8%, 4=20.1%, 8=58.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:32:37.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.756 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.5%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.756 issued rwts: total=1484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.756 filename0: (groupid=0, jobs=1): err= 0: pid=93189: Thu Apr 18 11:23:44 2024 00:32:37.756 read: IOPS=173, BW=695KiB/s (712kB/s)(7000KiB/10067msec) 00:32:37.756 slat (usec): min=5, max=8046, avg=23.10, stdev=226.57 00:32:37.756 clat (msec): min=19, max=185, avg=91.73, stdev=30.09 00:32:37.756 lat (msec): min=19, max=185, avg=91.76, stdev=30.09 00:32:37.756 clat percentiles (msec): 00:32:37.756 | 1.00th=[ 38], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 69], 00:32:37.756 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 91], 60.00th=[ 96], 00:32:37.756 | 70.00th=[ 103], 80.00th=[ 113], 90.00th=[ 132], 95.00th=[ 153], 00:32:37.756 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 186], 99.95th=[ 186], 00:32:37.756 | 99.99th=[ 186] 00:32:37.757 bw ( KiB/s): min= 512, max= 896, per=4.39%, avg=693.40, stdev=117.62, samples=20 00:32:37.757 iops : min= 128, max= 224, avg=173.35, stdev=29.41, samples=20 00:32:37.757 lat (msec) : 20=0.80%, 50=1.94%, 100=65.77%, 250=31.49% 00:32:37.757 cpu : usr=37.02%, sys=1.05%, ctx=1204, majf=0, minf=1635 00:32:37.757 IO depths : 1=2.1%, 2=4.5%, 4=13.0%, 8=69.0%, 16=11.5%, 32=0.0%, >=64=0.0% 00:32:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 issued rwts: total=1750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.757 filename0: (groupid=0, jobs=1): err= 0: pid=93190: Thu Apr 18 11:23:44 2024 00:32:37.757 read: IOPS=151, BW=606KiB/s (620kB/s)(6064KiB/10014msec) 00:32:37.757 slat (usec): min=4, max=8037, avg=28.06, stdev=318.46 00:32:37.757 clat (msec): min=45, max=251, avg=105.49, stdev=30.37 00:32:37.757 lat (msec): min=45, max=251, avg=105.52, stdev=30.37 00:32:37.757 clat percentiles (msec): 00:32:37.757 | 1.00th=[ 48], 5.00th=[ 60], 10.00th=[ 70], 20.00th=[ 87], 00:32:37.757 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 106], 00:32:37.757 | 70.00th=[ 115], 80.00th=[ 129], 90.00th=[ 146], 95.00th=[ 155], 00:32:37.757 | 99.00th=[ 205], 99.50th=[ 230], 99.90th=[ 253], 99.95th=[ 253], 00:32:37.757 | 99.99th=[ 253] 00:32:37.757 bw ( KiB/s): min= 344, max= 816, per=3.77%, avg=596.95, stdev=109.68, samples=19 00:32:37.757 iops : min= 86, max= 204, avg=149.21, stdev=27.41, samples=19 00:32:37.757 lat (msec) : 50=1.72%, 100=48.42%, 250=49.54%, 500=0.33% 00:32:37.757 cpu : usr=41.59%, sys=1.22%, ctx=1235, majf=0, minf=1636 00:32:37.757 IO depths : 1=1.8%, 2=3.8%, 4=12.5%, 8=70.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:32:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 issued rwts: total=1516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.757 filename0: (groupid=0, jobs=1): err= 0: pid=93191: Thu Apr 18 11:23:44 2024 00:32:37.757 read: IOPS=179, BW=716KiB/s (733kB/s)(7192KiB/10041msec) 00:32:37.757 slat (usec): min=5, max=7030, avg=22.74, stdev=213.10 00:32:37.757 clat (msec): min=27, max=173, avg=89.00, stdev=28.46 00:32:37.757 lat (msec): min=27, max=173, avg=89.02, stdev=28.46 00:32:37.757 clat percentiles (msec): 00:32:37.757 | 1.00th=[ 47], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 64], 00:32:37.757 | 
30.00th=[ 69], 40.00th=[ 77], 50.00th=[ 86], 60.00th=[ 92], 00:32:37.757 | 70.00th=[ 100], 80.00th=[ 112], 90.00th=[ 136], 95.00th=[ 146], 00:32:37.757 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 174], 00:32:37.757 | 99.99th=[ 174] 00:32:37.757 bw ( KiB/s): min= 488, max= 1005, per=4.53%, avg=715.85, stdev=137.66, samples=20 00:32:37.757 iops : min= 122, max= 251, avg=178.95, stdev=34.39, samples=20 00:32:37.757 lat (msec) : 50=2.17%, 100=69.02%, 250=28.81% 00:32:37.757 cpu : usr=46.93%, sys=1.42%, ctx=1333, majf=0, minf=1636 00:32:37.757 IO depths : 1=1.4%, 2=3.1%, 4=9.7%, 8=73.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:32:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 issued rwts: total=1798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.757 filename1: (groupid=0, jobs=1): err= 0: pid=93192: Thu Apr 18 11:23:44 2024 00:32:37.757 read: IOPS=144, BW=580KiB/s (594kB/s)(5804KiB/10008msec) 00:32:37.757 slat (usec): min=5, max=8047, avg=25.72, stdev=298.46 00:32:37.757 clat (msec): min=45, max=207, avg=110.20, stdev=26.98 00:32:37.757 lat (msec): min=45, max=207, avg=110.23, stdev=26.98 00:32:37.757 clat percentiles (msec): 00:32:37.757 | 1.00th=[ 57], 5.00th=[ 72], 10.00th=[ 85], 20.00th=[ 95], 00:32:37.757 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 111], 00:32:37.757 | 70.00th=[ 121], 80.00th=[ 129], 90.00th=[ 144], 95.00th=[ 163], 00:32:37.757 | 99.00th=[ 199], 99.50th=[ 209], 99.90th=[ 209], 99.95th=[ 209], 00:32:37.757 | 99.99th=[ 209] 00:32:37.757 bw ( KiB/s): min= 384, max= 680, per=3.60%, avg=569.74, stdev=81.06, samples=19 00:32:37.757 iops : min= 96, max= 170, avg=142.37, stdev=20.24, samples=19 00:32:37.757 lat (msec) : 50=0.62%, 100=44.31%, 250=55.07% 00:32:37.757 cpu : usr=35.92%, sys=0.79%, ctx=1094, majf=0, minf=1636 00:32:37.757 IO depths : 1=2.2%, 2=5.3%, 4=15.4%, 8=66.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:32:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 complete : 0=0.0%, 4=91.4%, 8=3.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 issued rwts: total=1451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.757 filename1: (groupid=0, jobs=1): err= 0: pid=93193: Thu Apr 18 11:23:44 2024 00:32:37.757 read: IOPS=153, BW=615KiB/s (630kB/s)(6156KiB/10009msec) 00:32:37.757 slat (usec): min=5, max=4033, avg=16.81, stdev=102.62 00:32:37.757 clat (msec): min=46, max=200, avg=103.91, stdev=25.81 00:32:37.757 lat (msec): min=46, max=200, avg=103.93, stdev=25.81 00:32:37.757 clat percentiles (msec): 00:32:37.757 | 1.00th=[ 64], 5.00th=[ 66], 10.00th=[ 77], 20.00th=[ 87], 00:32:37.757 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 104], 00:32:37.757 | 70.00th=[ 112], 80.00th=[ 123], 90.00th=[ 140], 95.00th=[ 155], 00:32:37.757 | 99.00th=[ 201], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 201], 00:32:37.757 | 99.99th=[ 201] 00:32:37.757 bw ( KiB/s): min= 384, max= 816, per=3.84%, avg=606.79, stdev=101.16, samples=19 00:32:37.757 iops : min= 96, max= 204, avg=151.58, stdev=25.25, samples=19 00:32:37.757 lat (msec) : 50=0.13%, 100=55.04%, 250=44.83% 00:32:37.757 cpu : usr=42.99%, sys=1.13%, ctx=1327, majf=0, minf=1634 00:32:37.757 IO depths : 1=3.4%, 2=7.2%, 4=17.2%, 8=63.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:32:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 complete : 0=0.0%, 4=92.0%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 issued rwts: total=1539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.757 filename1: (groupid=0, jobs=1): err= 0: pid=93194: Thu Apr 18 11:23:44 2024 00:32:37.757 read: IOPS=190, BW=761KiB/s (779kB/s)(7644KiB/10042msec) 00:32:37.757 slat (usec): min=5, max=8074, avg=18.11, stdev=184.49 00:32:37.757 clat (msec): min=2, max=191, avg=83.97, stdev=29.62 00:32:37.757 lat (msec): min=2, max=191, avg=83.99, stdev=29.62 00:32:37.757 clat percentiles (msec): 00:32:37.757 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 58], 20.00th=[ 61], 00:32:37.757 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 93], 00:32:37.757 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:32:37.757 | 99.00th=[ 159], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 192], 00:32:37.757 | 99.99th=[ 192] 00:32:37.757 bw ( KiB/s): min= 512, max= 1488, per=4.79%, avg=758.00, stdev=200.49, samples=20 00:32:37.757 iops : min= 128, max= 372, avg=189.50, stdev=50.12, samples=20 00:32:37.757 lat (msec) : 4=1.41%, 10=1.10%, 20=2.51%, 50=2.72%, 100=64.99% 00:32:37.757 lat (msec) : 250=27.26% 00:32:37.757 cpu : usr=32.63%, sys=1.00%, ctx=919, majf=0, minf=1637 00:32:37.757 IO depths : 1=0.6%, 2=1.4%, 4=6.8%, 8=77.4%, 16=13.8%, 32=0.0%, >=64=0.0% 00:32:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 complete : 0=0.0%, 4=89.4%, 8=6.9%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 issued rwts: total=1911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.757 filename1: (groupid=0, jobs=1): err= 0: pid=93195: Thu Apr 18 11:23:44 2024 00:32:37.757 read: IOPS=168, BW=672KiB/s (688kB/s)(6768KiB/10070msec) 00:32:37.757 slat (usec): min=8, max=8034, avg=25.30, stdev=292.30 00:32:37.757 clat (msec): min=50, max=191, avg=94.93, stdev=24.43 00:32:37.757 lat (msec): min=50, max=191, avg=94.96, stdev=24.43 00:32:37.757 clat percentiles (msec): 00:32:37.757 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 72], 00:32:37.757 | 30.00th=[ 81], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 97], 00:32:37.757 | 70.00th=[ 105], 80.00th=[ 112], 90.00th=[ 128], 95.00th=[ 142], 00:32:37.757 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:32:37.757 | 99.99th=[ 192] 00:32:37.757 bw ( KiB/s): min= 440, max= 864, per=4.24%, avg=670.40, stdev=102.99, samples=20 00:32:37.757 iops : min= 110, max= 216, avg=167.60, stdev=25.75, samples=20 00:32:37.757 lat (msec) : 100=65.43%, 250=34.57% 00:32:37.757 cpu : usr=37.84%, sys=1.09%, ctx=1154, majf=0, minf=1637 00:32:37.757 IO depths : 1=2.8%, 2=6.2%, 4=15.7%, 8=65.0%, 16=10.3%, 32=0.0%, >=64=0.0% 00:32:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 complete : 0=0.0%, 4=91.7%, 8=3.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 issued rwts: total=1692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.757 filename1: (groupid=0, jobs=1): err= 0: pid=93196: Thu Apr 18 11:23:44 2024 00:32:37.757 read: IOPS=143, BW=574KiB/s (588kB/s)(5760KiB/10035msec) 00:32:37.757 slat (usec): min=6, max=8039, avg=27.47, stdev=260.88 00:32:37.757 clat (msec): min=43, max=215, avg=111.31, stdev=27.53 00:32:37.757 lat (msec): min=43, max=215, avg=111.33, stdev=27.54 00:32:37.757 clat 
percentiles (msec): 00:32:37.757 | 1.00th=[ 44], 5.00th=[ 61], 10.00th=[ 84], 20.00th=[ 95], 00:32:37.757 | 30.00th=[ 96], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 117], 00:32:37.757 | 70.00th=[ 123], 80.00th=[ 132], 90.00th=[ 146], 95.00th=[ 157], 00:32:37.757 | 99.00th=[ 188], 99.50th=[ 205], 99.90th=[ 215], 99.95th=[ 215], 00:32:37.757 | 99.99th=[ 215] 00:32:37.757 bw ( KiB/s): min= 512, max= 675, per=3.58%, avg=565.63, stdev=65.81, samples=19 00:32:37.757 iops : min= 128, max= 168, avg=141.37, stdev=16.38, samples=19 00:32:37.757 lat (msec) : 50=2.15%, 100=36.88%, 250=60.97% 00:32:37.757 cpu : usr=35.58%, sys=1.00%, ctx=1070, majf=0, minf=1634 00:32:37.757 IO depths : 1=3.9%, 2=8.4%, 4=20.2%, 8=58.7%, 16=8.8%, 32=0.0%, >=64=0.0% 00:32:37.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.757 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.758 filename1: (groupid=0, jobs=1): err= 0: pid=93197: Thu Apr 18 11:23:44 2024 00:32:37.758 read: IOPS=148, BW=593KiB/s (607kB/s)(5952KiB/10039msec) 00:32:37.758 slat (usec): min=9, max=4038, avg=24.08, stdev=204.53 00:32:37.758 clat (msec): min=49, max=192, avg=107.55, stdev=26.54 00:32:37.758 lat (msec): min=49, max=192, avg=107.58, stdev=26.54 00:32:37.758 clat percentiles (msec): 00:32:37.758 | 1.00th=[ 54], 5.00th=[ 66], 10.00th=[ 78], 20.00th=[ 92], 00:32:37.758 | 30.00th=[ 96], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 107], 00:32:37.758 | 70.00th=[ 114], 80.00th=[ 131], 90.00th=[ 144], 95.00th=[ 157], 00:32:37.758 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 192], 99.95th=[ 192], 00:32:37.758 | 99.99th=[ 192] 00:32:37.758 bw ( KiB/s): min= 384, max= 680, per=3.70%, avg=585.89, stdev=73.16, samples=19 00:32:37.758 iops : min= 96, max= 170, avg=146.47, stdev=18.29, samples=19 00:32:37.758 lat (msec) : 50=0.40%, 100=43.82%, 250=55.78% 00:32:37.758 cpu : usr=40.00%, sys=1.16%, ctx=1171, majf=0, minf=1634 00:32:37.758 IO depths : 1=3.6%, 2=7.9%, 4=19.8%, 8=59.7%, 16=9.1%, 32=0.0%, >=64=0.0% 00:32:37.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 issued rwts: total=1488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.758 filename1: (groupid=0, jobs=1): err= 0: pid=93198: Thu Apr 18 11:23:44 2024 00:32:37.758 read: IOPS=194, BW=776KiB/s (795kB/s)(7812KiB/10064msec) 00:32:37.758 slat (usec): min=6, max=4026, avg=15.55, stdev=91.01 00:32:37.758 clat (msec): min=2, max=215, avg=82.25, stdev=28.01 00:32:37.758 lat (msec): min=2, max=215, avg=82.26, stdev=28.01 00:32:37.758 clat percentiles (msec): 00:32:37.758 | 1.00th=[ 10], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 63], 00:32:37.758 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 87], 00:32:37.758 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 116], 95.00th=[ 130], 00:32:37.758 | 99.00th=[ 169], 99.50th=[ 192], 99.90th=[ 215], 99.95th=[ 215], 00:32:37.758 | 99.99th=[ 215] 00:32:37.758 bw ( KiB/s): min= 512, max= 1280, per=4.90%, avg=774.65, stdev=158.72, samples=20 00:32:37.758 iops : min= 128, max= 320, avg=193.65, stdev=39.69, samples=20 00:32:37.758 lat (msec) : 4=0.82%, 10=0.82%, 20=1.64%, 50=2.56%, 100=75.06% 00:32:37.758 lat (msec) : 250=19.10% 00:32:37.758 cpu : usr=37.75%, sys=0.98%, 
ctx=1060, majf=0, minf=1635 00:32:37.758 IO depths : 1=1.4%, 2=2.9%, 4=9.9%, 8=73.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:32:37.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 issued rwts: total=1953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.758 filename1: (groupid=0, jobs=1): err= 0: pid=93199: Thu Apr 18 11:23:44 2024 00:32:37.758 read: IOPS=173, BW=693KiB/s (709kB/s)(6948KiB/10028msec) 00:32:37.758 slat (usec): min=5, max=8026, avg=20.97, stdev=215.20 00:32:37.758 clat (msec): min=45, max=179, avg=92.25, stdev=25.78 00:32:37.758 lat (msec): min=45, max=179, avg=92.27, stdev=25.77 00:32:37.758 clat percentiles (msec): 00:32:37.758 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 69], 00:32:37.758 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 92], 60.00th=[ 96], 00:32:37.758 | 70.00th=[ 103], 80.00th=[ 112], 90.00th=[ 131], 95.00th=[ 144], 00:32:37.758 | 99.00th=[ 176], 99.50th=[ 176], 99.90th=[ 180], 99.95th=[ 180], 00:32:37.758 | 99.99th=[ 180] 00:32:37.758 bw ( KiB/s): min= 512, max= 912, per=4.36%, avg=688.40, stdev=111.44, samples=20 00:32:37.758 iops : min= 128, max= 228, avg=172.10, stdev=27.86, samples=20 00:32:37.758 lat (msec) : 50=1.09%, 100=68.11%, 250=30.80% 00:32:37.758 cpu : usr=41.91%, sys=1.11%, ctx=1168, majf=0, minf=1635 00:32:37.758 IO depths : 1=1.5%, 2=3.3%, 4=10.9%, 8=72.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:32:37.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.758 filename2: (groupid=0, jobs=1): err= 0: pid=93200: Thu Apr 18 11:23:44 2024 00:32:37.758 read: IOPS=184, BW=738KiB/s (756kB/s)(7432KiB/10070msec) 00:32:37.758 slat (usec): min=9, max=8035, avg=22.30, stdev=263.07 00:32:37.758 clat (msec): min=9, max=191, avg=86.50, stdev=29.43 00:32:37.758 lat (msec): min=9, max=191, avg=86.52, stdev=29.42 00:32:37.758 clat percentiles (msec): 00:32:37.758 | 1.00th=[ 13], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 62], 00:32:37.758 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 86], 60.00th=[ 96], 00:32:37.758 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 133], 00:32:37.758 | 99.00th=[ 169], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 192], 00:32:37.758 | 99.99th=[ 192] 00:32:37.758 bw ( KiB/s): min= 472, max= 1384, per=4.66%, avg=736.70, stdev=185.98, samples=20 00:32:37.758 iops : min= 118, max= 346, avg=184.15, stdev=46.49, samples=20 00:32:37.758 lat (msec) : 10=0.86%, 20=2.58%, 50=5.06%, 100=65.66%, 250=25.83% 00:32:37.758 cpu : usr=33.85%, sys=0.92%, ctx=947, majf=0, minf=1635 00:32:37.758 IO depths : 1=0.9%, 2=1.9%, 4=8.3%, 8=76.0%, 16=12.8%, 32=0.0%, >=64=0.0% 00:32:37.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 complete : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 issued rwts: total=1858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.758 filename2: (groupid=0, jobs=1): err= 0: pid=93201: Thu Apr 18 11:23:44 2024 00:32:37.758 read: IOPS=182, BW=730KiB/s (748kB/s)(7316KiB/10020msec) 00:32:37.758 slat (usec): min=7, max=8036, avg=18.46, stdev=187.71 00:32:37.758 
clat (msec): min=32, max=179, avg=87.56, stdev=23.47 00:32:37.758 lat (msec): min=32, max=179, avg=87.58, stdev=23.47 00:32:37.758 clat percentiles (msec): 00:32:37.758 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 69], 00:32:37.758 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 95], 00:32:37.758 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 132], 00:32:37.758 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 180], 99.95th=[ 180], 00:32:37.758 | 99.99th=[ 180] 00:32:37.758 bw ( KiB/s): min= 512, max= 864, per=4.59%, avg=725.25, stdev=100.82, samples=20 00:32:37.758 iops : min= 128, max= 216, avg=181.30, stdev=25.20, samples=20 00:32:37.758 lat (msec) : 50=3.44%, 100=72.94%, 250=23.62% 00:32:37.758 cpu : usr=32.70%, sys=0.94%, ctx=899, majf=0, minf=1636 00:32:37.758 IO depths : 1=1.0%, 2=2.5%, 4=9.0%, 8=74.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:32:37.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 issued rwts: total=1829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.758 filename2: (groupid=0, jobs=1): err= 0: pid=93202: Thu Apr 18 11:23:44 2024 00:32:37.758 read: IOPS=152, BW=609KiB/s (624kB/s)(6108KiB/10031msec) 00:32:37.758 slat (usec): min=7, max=8039, avg=23.75, stdev=290.36 00:32:37.758 clat (msec): min=32, max=241, avg=104.89, stdev=31.16 00:32:37.758 lat (msec): min=32, max=241, avg=104.91, stdev=31.15 00:32:37.758 clat percentiles (msec): 00:32:37.758 | 1.00th=[ 38], 5.00th=[ 61], 10.00th=[ 71], 20.00th=[ 83], 00:32:37.758 | 30.00th=[ 87], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 108], 00:32:37.758 | 70.00th=[ 117], 80.00th=[ 130], 90.00th=[ 146], 95.00th=[ 157], 00:32:37.758 | 99.00th=[ 205], 99.50th=[ 207], 99.90th=[ 243], 99.95th=[ 243], 00:32:37.758 | 99.99th=[ 243] 00:32:37.758 bw ( KiB/s): min= 509, max= 792, per=3.81%, avg=602.79, stdev=93.63, samples=19 00:32:37.758 iops : min= 127, max= 198, avg=150.63, stdev=23.39, samples=19 00:32:37.758 lat (msec) : 50=1.51%, 100=52.91%, 250=45.58% 00:32:37.758 cpu : usr=33.23%, sys=1.01%, ctx=929, majf=0, minf=1636 00:32:37.758 IO depths : 1=2.6%, 2=6.0%, 4=16.1%, 8=64.8%, 16=10.4%, 32=0.0%, >=64=0.0% 00:32:37.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 complete : 0=0.0%, 4=91.7%, 8=3.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 issued rwts: total=1527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.758 filename2: (groupid=0, jobs=1): err= 0: pid=93203: Thu Apr 18 11:23:44 2024 00:32:37.758 read: IOPS=147, BW=592KiB/s (606kB/s)(5940KiB/10034msec) 00:32:37.758 slat (usec): min=5, max=8032, avg=20.46, stdev=208.23 00:32:37.758 clat (msec): min=42, max=204, avg=107.96, stdev=25.02 00:32:37.758 lat (msec): min=42, max=204, avg=107.98, stdev=25.02 00:32:37.758 clat percentiles (msec): 00:32:37.758 | 1.00th=[ 50], 5.00th=[ 68], 10.00th=[ 84], 20.00th=[ 93], 00:32:37.758 | 30.00th=[ 96], 40.00th=[ 97], 50.00th=[ 105], 60.00th=[ 111], 00:32:37.758 | 70.00th=[ 121], 80.00th=[ 130], 90.00th=[ 142], 95.00th=[ 148], 00:32:37.758 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 205], 99.95th=[ 205], 00:32:37.758 | 99.99th=[ 205] 00:32:37.758 bw ( KiB/s): min= 512, max= 696, per=3.70%, avg=584.68, stdev=68.22, samples=19 00:32:37.758 iops : min= 128, max= 174, avg=146.16, stdev=17.07, samples=19 00:32:37.758 lat (msec) 
: 50=1.35%, 100=44.44%, 250=54.21% 00:32:37.758 cpu : usr=34.12%, sys=0.88%, ctx=993, majf=0, minf=1636 00:32:37.758 IO depths : 1=3.0%, 2=6.5%, 4=16.7%, 8=64.1%, 16=9.8%, 32=0.0%, >=64=0.0% 00:32:37.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.758 issued rwts: total=1485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.758 filename2: (groupid=0, jobs=1): err= 0: pid=93204: Thu Apr 18 11:23:44 2024 00:32:37.758 read: IOPS=176, BW=706KiB/s (723kB/s)(7112KiB/10068msec) 00:32:37.758 slat (usec): min=5, max=12042, avg=58.78, stdev=668.53 00:32:37.758 clat (msec): min=37, max=206, avg=89.96, stdev=28.26 00:32:37.758 lat (msec): min=37, max=206, avg=90.02, stdev=28.29 00:32:37.758 clat percentiles (msec): 00:32:37.758 | 1.00th=[ 47], 5.00th=[ 57], 10.00th=[ 60], 20.00th=[ 68], 00:32:37.758 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 94], 00:32:37.758 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 128], 95.00th=[ 144], 00:32:37.759 | 99.00th=[ 178], 99.50th=[ 207], 99.90th=[ 207], 99.95th=[ 207], 00:32:37.759 | 99.99th=[ 207] 00:32:37.759 bw ( KiB/s): min= 312, max= 890, per=4.46%, avg=704.50, stdev=136.27, samples=20 00:32:37.759 iops : min= 78, max= 222, avg=176.10, stdev=34.03, samples=20 00:32:37.759 lat (msec) : 50=3.54%, 100=69.97%, 250=26.49% 00:32:37.759 cpu : usr=33.85%, sys=1.07%, ctx=1000, majf=0, minf=1637 00:32:37.759 IO depths : 1=1.1%, 2=2.4%, 4=8.9%, 8=74.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:32:37.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.759 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.759 issued rwts: total=1778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.759 filename2: (groupid=0, jobs=1): err= 0: pid=93205: Thu Apr 18 11:23:44 2024 00:32:37.759 read: IOPS=174, BW=697KiB/s (714kB/s)(6972KiB/10002msec) 00:32:37.759 slat (usec): min=5, max=4026, avg=16.24, stdev=96.32 00:32:37.759 clat (msec): min=37, max=172, avg=91.68, stdev=26.16 00:32:37.759 lat (msec): min=38, max=172, avg=91.70, stdev=26.16 00:32:37.759 clat percentiles (msec): 00:32:37.759 | 1.00th=[ 47], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 67], 00:32:37.759 | 30.00th=[ 73], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 96], 00:32:37.759 | 70.00th=[ 101], 80.00th=[ 113], 90.00th=[ 128], 95.00th=[ 144], 00:32:37.759 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 174], 00:32:37.759 | 99.99th=[ 174] 00:32:37.759 bw ( KiB/s): min= 480, max= 944, per=4.41%, avg=697.21, stdev=132.45, samples=19 00:32:37.759 iops : min= 120, max= 236, avg=174.21, stdev=33.12, samples=19 00:32:37.759 lat (msec) : 50=1.38%, 100=68.62%, 250=30.01% 00:32:37.759 cpu : usr=43.34%, sys=1.27%, ctx=1658, majf=0, minf=1634 00:32:37.759 IO depths : 1=2.2%, 2=4.6%, 4=12.6%, 8=69.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:32:37.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.759 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.759 issued rwts: total=1743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.759 filename2: (groupid=0, jobs=1): err= 0: pid=93206: Thu Apr 18 11:23:44 2024 00:32:37.759 read: IOPS=168, BW=672KiB/s (689kB/s)(6752KiB/10042msec) 00:32:37.759 slat 
(usec): min=6, max=5031, avg=17.15, stdev=122.27 00:32:37.759 clat (msec): min=46, max=175, avg=95.06, stdev=25.44 00:32:37.759 lat (msec): min=46, max=175, avg=95.08, stdev=25.44 00:32:37.759 clat percentiles (msec): 00:32:37.759 | 1.00th=[ 52], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 70], 00:32:37.759 | 30.00th=[ 80], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 99], 00:32:37.759 | 70.00th=[ 107], 80.00th=[ 116], 90.00th=[ 131], 95.00th=[ 144], 00:32:37.759 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 176], 99.95th=[ 176], 00:32:37.759 | 99.99th=[ 176] 00:32:37.759 bw ( KiB/s): min= 512, max= 944, per=4.23%, avg=668.95, stdev=128.70, samples=20 00:32:37.759 iops : min= 128, max= 236, avg=167.15, stdev=32.16, samples=20 00:32:37.759 lat (msec) : 50=0.24%, 100=63.68%, 250=36.08% 00:32:37.759 cpu : usr=43.42%, sys=1.29%, ctx=1506, majf=0, minf=1636 00:32:37.759 IO depths : 1=2.5%, 2=5.5%, 4=14.6%, 8=67.2%, 16=10.2%, 32=0.0%, >=64=0.0% 00:32:37.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.759 complete : 0=0.0%, 4=91.2%, 8=3.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.759 issued rwts: total=1688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.759 filename2: (groupid=0, jobs=1): err= 0: pid=93207: Thu Apr 18 11:23:44 2024 00:32:37.759 read: IOPS=157, BW=629KiB/s (645kB/s)(6316KiB/10034msec) 00:32:37.759 slat (usec): min=9, max=8036, avg=23.92, stdev=285.49 00:32:37.759 clat (msec): min=37, max=201, avg=101.36, stdev=28.44 00:32:37.759 lat (msec): min=37, max=201, avg=101.38, stdev=28.44 00:32:37.759 clat percentiles (msec): 00:32:37.759 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 79], 00:32:37.759 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 104], 00:32:37.759 | 70.00th=[ 117], 80.00th=[ 128], 90.00th=[ 140], 95.00th=[ 155], 00:32:37.759 | 99.00th=[ 171], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 201], 00:32:37.759 | 99.99th=[ 201] 00:32:37.759 bw ( KiB/s): min= 510, max= 864, per=3.98%, avg=628.80, stdev=97.69, samples=20 00:32:37.759 iops : min= 127, max= 216, avg=157.15, stdev=24.45, samples=20 00:32:37.759 lat (msec) : 50=1.96%, 100=55.67%, 250=42.37% 00:32:37.759 cpu : usr=32.32%, sys=0.99%, ctx=887, majf=0, minf=1636 00:32:37.759 IO depths : 1=2.4%, 2=5.4%, 4=14.2%, 8=67.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:32:37.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.759 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.759 issued rwts: total=1579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.759 00:32:37.759 Run status group 0 (all jobs): 00:32:37.759 READ: bw=15.4MiB/s (16.2MB/s), 574KiB/s-776KiB/s (588kB/s-795kB/s), io=155MiB (163MB), run=10002-10070msec 00:32:38.017 ----------------------------------------------------- 00:32:38.017 Suppressions used: 00:32:38.017 count bytes template 00:32:38.017 45 402 /usr/src/fio/parse.c 00:32:38.017 1 8 libtcmalloc_minimal.so 00:32:38.017 1 904 libcrypto.so 00:32:38.017 ----------------------------------------------------- 00:32:38.017 00:32:38.276 11:23:46 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:38.276 11:23:46 -- target/dif.sh@43 -- # local sub 00:32:38.276 11:23:46 -- target/dif.sh@45 -- # for sub in "$@" 00:32:38.276 11:23:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:38.276 11:23:46 -- target/dif.sh@36 -- # local sub_id=0 00:32:38.276 11:23:46 -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:38.276 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.276 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.276 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.276 11:23:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:38.276 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.276 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.276 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.276 11:23:46 -- target/dif.sh@45 -- # for sub in "$@" 00:32:38.276 11:23:46 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:38.276 11:23:46 -- target/dif.sh@36 -- # local sub_id=1 00:32:38.276 11:23:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:38.276 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.276 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.276 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.276 11:23:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:38.276 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.276 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.276 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.276 11:23:46 -- target/dif.sh@45 -- # for sub in "$@" 00:32:38.276 11:23:46 -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:38.276 11:23:46 -- target/dif.sh@36 -- # local sub_id=2 00:32:38.276 11:23:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:38.276 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.277 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.277 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.277 11:23:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:38.277 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.277 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.277 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.277 11:23:46 -- target/dif.sh@115 -- # NULL_DIF=1 00:32:38.277 11:23:46 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:38.277 11:23:46 -- target/dif.sh@115 -- # numjobs=2 00:32:38.277 11:23:46 -- target/dif.sh@115 -- # iodepth=8 00:32:38.277 11:23:46 -- target/dif.sh@115 -- # runtime=5 00:32:38.277 11:23:46 -- target/dif.sh@115 -- # files=1 00:32:38.277 11:23:46 -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:38.277 11:23:46 -- target/dif.sh@28 -- # local sub 00:32:38.277 11:23:46 -- target/dif.sh@30 -- # for sub in "$@" 00:32:38.277 11:23:46 -- target/dif.sh@31 -- # create_subsystem 0 00:32:38.277 11:23:46 -- target/dif.sh@18 -- # local sub_id=0 00:32:38.277 11:23:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:38.277 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.277 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.277 bdev_null0 00:32:38.277 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.277 11:23:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:38.277 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.277 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.277 11:23:46 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.277 11:23:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:38.277 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.277 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.277 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.277 11:23:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:38.277 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.277 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.277 [2024-04-18 11:23:46.337968] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.277 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.277 11:23:46 -- target/dif.sh@30 -- # for sub in "$@" 00:32:38.277 11:23:46 -- target/dif.sh@31 -- # create_subsystem 1 00:32:38.277 11:23:46 -- target/dif.sh@18 -- # local sub_id=1 00:32:38.277 11:23:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:38.277 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.277 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.277 bdev_null1 00:32:38.277 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.277 11:23:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:38.277 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.277 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.277 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.277 11:23:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:38.277 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.277 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.277 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.277 11:23:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.277 11:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.277 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:32:38.277 11:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.277 11:23:46 -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:38.277 11:23:46 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:38.277 11:23:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:38.277 11:23:46 -- nvmf/common.sh@521 -- # config=() 00:32:38.277 11:23:46 -- nvmf/common.sh@521 -- # local subsystem config 00:32:38.277 11:23:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:38.277 11:23:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:38.277 { 00:32:38.277 "params": { 00:32:38.277 "name": "Nvme$subsystem", 00:32:38.277 "trtype": "$TEST_TRANSPORT", 00:32:38.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.277 "adrfam": "ipv4", 00:32:38.277 "trsvcid": "$NVMF_PORT", 00:32:38.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.277 "hdgst": ${hdgst:-false}, 00:32:38.277 "ddgst": ${ddgst:-false} 00:32:38.277 }, 00:32:38.277 "method": "bdev_nvme_attach_controller" 00:32:38.277 } 00:32:38.277 EOF 00:32:38.277 )") 00:32:38.277 11:23:46 -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:38.277 11:23:46 -- target/dif.sh@82 -- # gen_fio_conf 00:32:38.277 11:23:46 -- target/dif.sh@54 -- # local file 00:32:38.277 11:23:46 -- target/dif.sh@56 -- # cat 00:32:38.277 11:23:46 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:38.277 11:23:46 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:38.277 11:23:46 -- nvmf/common.sh@543 -- # cat 00:32:38.277 11:23:46 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:38.277 11:23:46 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:38.277 11:23:46 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:38.277 11:23:46 -- common/autotest_common.sh@1327 -- # shift 00:32:38.277 11:23:46 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:38.277 11:23:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:38.277 11:23:46 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:38.277 11:23:46 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:38.277 11:23:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:38.277 11:23:46 -- target/dif.sh@72 -- # (( file <= files )) 00:32:38.277 11:23:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:38.277 11:23:46 -- target/dif.sh@73 -- # cat 00:32:38.277 11:23:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:38.277 11:23:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:38.277 { 00:32:38.277 "params": { 00:32:38.277 "name": "Nvme$subsystem", 00:32:38.277 "trtype": "$TEST_TRANSPORT", 00:32:38.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.277 "adrfam": "ipv4", 00:32:38.277 "trsvcid": "$NVMF_PORT", 00:32:38.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.277 "hdgst": ${hdgst:-false}, 00:32:38.277 "ddgst": ${ddgst:-false} 00:32:38.277 }, 00:32:38.277 "method": "bdev_nvme_attach_controller" 00:32:38.277 } 00:32:38.277 EOF 00:32:38.277 )") 00:32:38.277 11:23:46 -- nvmf/common.sh@543 -- # cat 00:32:38.277 11:23:46 -- target/dif.sh@72 -- # (( file++ )) 00:32:38.277 11:23:46 -- target/dif.sh@72 -- # (( file <= files )) 00:32:38.277 11:23:46 -- nvmf/common.sh@545 -- # jq . 
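
[Editor's illustration] The rpc_cmd calls traced above configure the target side for this run: DIF-type-1 null bdevs exposed as NVMe/TCP subsystems listening on 10.0.0.2:4420. For anyone replaying this outside the test harness, the same setup can be driven directly with SPDK's scripts/rpc.py. This is only a sketch that restates the traced commands; it assumes an nvmf_tgt application is already running with its default RPC socket and that the TCP transport was created beforehand (the test does this earlier in the log).

  # mirrors the rpc_cmd calls shown in the trace above (subsystem 0; repeat for bdev_null1 / cnode1)
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
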
00:32:38.277 11:23:46 -- nvmf/common.sh@546 -- # IFS=, 00:32:38.277 11:23:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:38.277 "params": { 00:32:38.277 "name": "Nvme0", 00:32:38.277 "trtype": "tcp", 00:32:38.277 "traddr": "10.0.0.2", 00:32:38.277 "adrfam": "ipv4", 00:32:38.277 "trsvcid": "4420", 00:32:38.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:38.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:38.277 "hdgst": false, 00:32:38.277 "ddgst": false 00:32:38.277 }, 00:32:38.277 "method": "bdev_nvme_attach_controller" 00:32:38.277 },{ 00:32:38.277 "params": { 00:32:38.277 "name": "Nvme1", 00:32:38.277 "trtype": "tcp", 00:32:38.277 "traddr": "10.0.0.2", 00:32:38.277 "adrfam": "ipv4", 00:32:38.277 "trsvcid": "4420", 00:32:38.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:38.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:38.277 "hdgst": false, 00:32:38.277 "ddgst": false 00:32:38.277 }, 00:32:38.277 "method": "bdev_nvme_attach_controller" 00:32:38.277 }' 00:32:38.277 11:23:46 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:38.277 11:23:46 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:38.277 11:23:46 -- common/autotest_common.sh@1333 -- # break 00:32:38.277 11:23:46 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:38.277 11:23:46 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:38.536 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:38.536 ... 00:32:38.536 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:38.536 ... 
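
[Editor's illustration] The JSON printed just above is what the fio bdev plugin consumes to attach the two TCP controllers before the workload starts. A rough standalone equivalent of this step is sketched below. It is an illustration only: the generated job file (/dev/fd/61) is not shown in the log, so the section layout, thread=1, and the bdev names Nvme0n1/Nvme1n1 are assumptions, while the I/O parameters (randread, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5), the plugin path, and the ASan preload are the ones visible in the trace.

  # save the JSON config printed above as nvme_tcp.json, then write a job file and run fio
  cat > dif_rand.fio <<'EOF'
  [global]
  thread=1
  ioengine=spdk_bdev

  [filename0]
  filename=Nvme0n1
  rw=randread
  bs=8k,16k,128k
  numjobs=2
  iodepth=8
  runtime=5

  [filename1]
  filename=Nvme1n1
  rw=randread
  bs=8k,16k,128k
  numjobs=2
  iodepth=8
  runtime=5
  EOF
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    fio --ioengine=spdk_bdev --spdk_json_conf=./nvme_tcp.json dif_rand.fio
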
00:32:38.536 fio-3.35 00:32:38.536 Starting 4 threads 00:32:45.148 00:32:45.148 filename0: (groupid=0, jobs=1): err= 0: pid=93343: Thu Apr 18 11:23:52 2024 00:32:45.148 read: IOPS=1584, BW=12.4MiB/s (13.0MB/s)(61.9MiB/5004msec) 00:32:45.148 slat (nsec): min=8618, max=77381, avg=13181.56, stdev=5887.27 00:32:45.148 clat (usec): min=3890, max=8967, avg=4980.59, stdev=178.19 00:32:45.148 lat (usec): min=3907, max=9016, avg=4993.77, stdev=178.90 00:32:45.148 clat percentiles (usec): 00:32:45.148 | 1.00th=[ 4817], 5.00th=[ 4817], 10.00th=[ 4883], 20.00th=[ 4883], 00:32:45.148 | 30.00th=[ 4948], 40.00th=[ 4948], 50.00th=[ 4948], 60.00th=[ 5014], 00:32:45.148 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5080], 95.00th=[ 5145], 00:32:45.148 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 8848], 99.95th=[ 8979], 00:32:45.148 | 99.99th=[ 8979] 00:32:45.148 bw ( KiB/s): min=12416, max=12928, per=24.97%, avg=12657.78, stdev=162.47, samples=9 00:32:45.148 iops : min= 1552, max= 1616, avg=1582.22, stdev=20.31, samples=9 00:32:45.148 lat (msec) : 4=0.03%, 10=99.97% 00:32:45.148 cpu : usr=93.92%, sys=4.76%, ctx=71, majf=0, minf=1637 00:32:45.148 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.148 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.148 issued rwts: total=7928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.148 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:45.148 filename0: (groupid=0, jobs=1): err= 0: pid=93344: Thu Apr 18 11:23:52 2024 00:32:45.148 read: IOPS=1584, BW=12.4MiB/s (13.0MB/s)(61.9MiB/5004msec) 00:32:45.148 slat (nsec): min=4555, max=89793, avg=15969.00, stdev=6174.34 00:32:45.148 clat (usec): min=4688, max=8258, avg=4971.24, stdev=168.31 00:32:45.148 lat (usec): min=4706, max=8277, avg=4987.21, stdev=167.66 00:32:45.148 clat percentiles (usec): 00:32:45.148 | 1.00th=[ 4752], 5.00th=[ 4817], 10.00th=[ 4817], 20.00th=[ 4883], 00:32:45.148 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 4948], 60.00th=[ 5014], 00:32:45.148 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5080], 95.00th=[ 5145], 00:32:45.148 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 8225], 99.95th=[ 8225], 00:32:45.148 | 99.99th=[ 8291] 00:32:45.148 bw ( KiB/s): min=12440, max=12928, per=24.97%, avg=12660.44, stdev=170.60, samples=9 00:32:45.148 iops : min= 1555, max= 1616, avg=1582.56, stdev=21.33, samples=9 00:32:45.148 lat (msec) : 10=100.00% 00:32:45.148 cpu : usr=94.08%, sys=4.66%, ctx=24, majf=0, minf=1637 00:32:45.148 IO depths : 1=12.4%, 2=24.9%, 4=50.1%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.148 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.148 issued rwts: total=7928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.148 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:45.148 filename1: (groupid=0, jobs=1): err= 0: pid=93345: Thu Apr 18 11:23:52 2024 00:32:45.148 read: IOPS=1584, BW=12.4MiB/s (13.0MB/s)(61.9MiB/5003msec) 00:32:45.148 slat (nsec): min=5753, max=82977, avg=18469.64, stdev=5248.67 00:32:45.148 clat (usec): min=3828, max=7723, avg=4954.41, stdev=153.15 00:32:45.148 lat (usec): min=3847, max=7757, avg=4972.88, stdev=153.73 00:32:45.148 clat percentiles (usec): 00:32:45.148 | 1.00th=[ 4752], 5.00th=[ 4817], 10.00th=[ 4817], 20.00th=[ 4883], 00:32:45.148 | 30.00th=[ 4883], 40.00th=[ 4948], 
50.00th=[ 4948], 60.00th=[ 4948], 00:32:45.148 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5080], 95.00th=[ 5145], 00:32:45.148 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 7570], 99.95th=[ 7635], 00:32:45.148 | 99.99th=[ 7701] 00:32:45.148 bw ( KiB/s): min=12416, max=12928, per=24.99%, avg=12672.00, stdev=192.00, samples=9 00:32:45.148 iops : min= 1552, max= 1616, avg=1584.00, stdev=24.00, samples=9 00:32:45.148 lat (msec) : 4=0.01%, 10=99.99% 00:32:45.148 cpu : usr=93.76%, sys=4.94%, ctx=24, majf=0, minf=1635 00:32:45.148 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.148 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.148 issued rwts: total=7928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.148 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:45.148 filename1: (groupid=0, jobs=1): err= 0: pid=93346: Thu Apr 18 11:23:52 2024 00:32:45.148 read: IOPS=1584, BW=12.4MiB/s (13.0MB/s)(61.9MiB/5002msec) 00:32:45.148 slat (nsec): min=6047, max=66634, avg=17578.85, stdev=4925.31 00:32:45.148 clat (usec): min=3462, max=7757, avg=4957.63, stdev=142.97 00:32:45.148 lat (usec): min=3479, max=7782, avg=4975.21, stdev=143.62 00:32:45.148 clat percentiles (usec): 00:32:45.148 | 1.00th=[ 4752], 5.00th=[ 4817], 10.00th=[ 4817], 20.00th=[ 4883], 00:32:45.148 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 4948], 60.00th=[ 4948], 00:32:45.148 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5080], 95.00th=[ 5145], 00:32:45.148 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 7635], 99.95th=[ 7701], 00:32:45.148 | 99.99th=[ 7767] 00:32:45.148 bw ( KiB/s): min=12416, max=12928, per=24.99%, avg=12672.00, stdev=169.33, samples=9 00:32:45.148 iops : min= 1552, max= 1616, avg=1584.00, stdev=21.17, samples=9 00:32:45.148 lat (msec) : 4=0.08%, 10=99.92% 00:32:45.148 cpu : usr=93.92%, sys=4.88%, ctx=8, majf=0, minf=1637 00:32:45.148 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.148 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.148 issued rwts: total=7928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.148 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:45.148 00:32:45.148 Run status group 0 (all jobs): 00:32:45.148 READ: bw=49.5MiB/s (51.9MB/s), 12.4MiB/s-12.4MiB/s (13.0MB/s-13.0MB/s), io=248MiB (260MB), run=5002-5004msec 00:32:45.715 ----------------------------------------------------- 00:32:45.715 Suppressions used: 00:32:45.715 count bytes template 00:32:45.715 6 52 /usr/src/fio/parse.c 00:32:45.715 1 8 libtcmalloc_minimal.so 00:32:45.715 1 904 libcrypto.so 00:32:45.715 ----------------------------------------------------- 00:32:45.715 00:32:45.715 11:23:53 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:45.715 11:23:53 -- target/dif.sh@43 -- # local sub 00:32:45.715 11:23:53 -- target/dif.sh@45 -- # for sub in "$@" 00:32:45.715 11:23:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:45.715 11:23:53 -- target/dif.sh@36 -- # local sub_id=0 00:32:45.715 11:23:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:45.715 11:23:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.715 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:32:45.715 11:23:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.715 
11:23:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:45.715 11:23:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.715 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:32:45.715 11:23:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.715 11:23:53 -- target/dif.sh@45 -- # for sub in "$@" 00:32:45.715 11:23:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:45.715 11:23:53 -- target/dif.sh@36 -- # local sub_id=1 00:32:45.715 11:23:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:45.715 11:23:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.715 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:32:45.715 11:23:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.715 11:23:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:45.715 11:23:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.715 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:32:45.715 ************************************ 00:32:45.715 END TEST fio_dif_rand_params 00:32:45.715 ************************************ 00:32:45.715 11:23:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.715 00:32:45.715 real 0m28.042s 00:32:45.715 user 2m10.627s 00:32:45.715 sys 0m5.763s 00:32:45.715 11:23:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:45.715 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:32:45.715 11:23:53 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:45.715 11:23:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:45.715 11:23:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:45.715 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:32:45.974 ************************************ 00:32:45.974 START TEST fio_dif_digest 00:32:45.974 ************************************ 00:32:45.974 11:23:54 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:32:45.974 11:23:54 -- target/dif.sh@123 -- # local NULL_DIF 00:32:45.974 11:23:54 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:45.974 11:23:54 -- target/dif.sh@125 -- # local hdgst ddgst 00:32:45.974 11:23:54 -- target/dif.sh@127 -- # NULL_DIF=3 00:32:45.974 11:23:54 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:45.974 11:23:54 -- target/dif.sh@127 -- # numjobs=3 00:32:45.974 11:23:54 -- target/dif.sh@127 -- # iodepth=3 00:32:45.974 11:23:54 -- target/dif.sh@127 -- # runtime=10 00:32:45.974 11:23:54 -- target/dif.sh@128 -- # hdgst=true 00:32:45.974 11:23:54 -- target/dif.sh@128 -- # ddgst=true 00:32:45.974 11:23:54 -- target/dif.sh@130 -- # create_subsystems 0 00:32:45.974 11:23:54 -- target/dif.sh@28 -- # local sub 00:32:45.974 11:23:54 -- target/dif.sh@30 -- # for sub in "$@" 00:32:45.974 11:23:54 -- target/dif.sh@31 -- # create_subsystem 0 00:32:45.974 11:23:54 -- target/dif.sh@18 -- # local sub_id=0 00:32:45.974 11:23:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:45.974 11:23:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.974 11:23:54 -- common/autotest_common.sh@10 -- # set +x 00:32:45.974 bdev_null0 00:32:45.974 11:23:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.975 11:23:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:45.975 11:23:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.975 11:23:54 -- 
common/autotest_common.sh@10 -- # set +x 00:32:45.975 11:23:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.975 11:23:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:45.975 11:23:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.975 11:23:54 -- common/autotest_common.sh@10 -- # set +x 00:32:45.975 11:23:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.975 11:23:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:45.975 11:23:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:45.975 11:23:54 -- common/autotest_common.sh@10 -- # set +x 00:32:45.975 [2024-04-18 11:23:54.039356] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.975 11:23:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:45.975 11:23:54 -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:45.975 11:23:54 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:45.975 11:23:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:45.975 11:23:54 -- nvmf/common.sh@521 -- # config=() 00:32:45.975 11:23:54 -- nvmf/common.sh@521 -- # local subsystem config 00:32:45.975 11:23:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:45.975 11:23:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:45.975 11:23:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:45.975 { 00:32:45.975 "params": { 00:32:45.975 "name": "Nvme$subsystem", 00:32:45.975 "trtype": "$TEST_TRANSPORT", 00:32:45.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.975 "adrfam": "ipv4", 00:32:45.975 "trsvcid": "$NVMF_PORT", 00:32:45.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.975 "hdgst": ${hdgst:-false}, 00:32:45.975 "ddgst": ${ddgst:-false} 00:32:45.975 }, 00:32:45.975 "method": "bdev_nvme_attach_controller" 00:32:45.975 } 00:32:45.975 EOF 00:32:45.975 )") 00:32:45.975 11:23:54 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:45.975 11:23:54 -- target/dif.sh@82 -- # gen_fio_conf 00:32:45.975 11:23:54 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:45.975 11:23:54 -- target/dif.sh@54 -- # local file 00:32:45.975 11:23:54 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:45.975 11:23:54 -- target/dif.sh@56 -- # cat 00:32:45.975 11:23:54 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:45.975 11:23:54 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:45.975 11:23:54 -- common/autotest_common.sh@1327 -- # shift 00:32:45.975 11:23:54 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:45.975 11:23:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:45.975 11:23:54 -- nvmf/common.sh@543 -- # cat 00:32:45.975 11:23:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:45.975 11:23:54 -- target/dif.sh@72 -- # (( file <= files )) 00:32:45.975 11:23:54 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:45.975 11:23:54 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:45.975 11:23:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:45.975 11:23:54 -- 
nvmf/common.sh@545 -- # jq . 00:32:45.975 11:23:54 -- nvmf/common.sh@546 -- # IFS=, 00:32:45.975 11:23:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:45.975 "params": { 00:32:45.975 "name": "Nvme0", 00:32:45.975 "trtype": "tcp", 00:32:45.975 "traddr": "10.0.0.2", 00:32:45.975 "adrfam": "ipv4", 00:32:45.975 "trsvcid": "4420", 00:32:45.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:45.975 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:45.975 "hdgst": true, 00:32:45.975 "ddgst": true 00:32:45.975 }, 00:32:45.975 "method": "bdev_nvme_attach_controller" 00:32:45.975 }' 00:32:45.975 11:23:54 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:45.975 11:23:54 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:45.975 11:23:54 -- common/autotest_common.sh@1333 -- # break 00:32:45.975 11:23:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:45.975 11:23:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.233 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:46.233 ... 00:32:46.233 fio-3.35 00:32:46.233 Starting 3 threads 00:32:58.430 00:32:58.430 filename0: (groupid=0, jobs=1): err= 0: pid=93463: Thu Apr 18 11:24:05 2024 00:32:58.430 read: IOPS=211, BW=26.5MiB/s (27.8MB/s)(265MiB/10010msec) 00:32:58.430 slat (nsec): min=6596, max=47950, avg=19884.41, stdev=3670.96 00:32:58.430 clat (usec): min=10192, max=21332, avg=14127.08, stdev=1146.87 00:32:58.430 lat (usec): min=10210, max=21366, avg=14146.97, stdev=1146.86 00:32:58.430 clat percentiles (usec): 00:32:58.430 | 1.00th=[11469], 5.00th=[12256], 10.00th=[12649], 20.00th=[13173], 00:32:58.430 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14484], 00:32:58.430 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[15926], 00:32:58.430 | 99.00th=[16450], 99.50th=[16909], 99.90th=[20579], 99.95th=[21365], 00:32:58.430 | 99.99th=[21365] 00:32:58.430 bw ( KiB/s): min=25344, max=28160, per=39.78%, avg=27146.53, stdev=591.64, samples=19 00:32:58.430 iops : min= 198, max= 220, avg=212.05, stdev= 4.60, samples=19 00:32:58.430 lat (msec) : 20=99.86%, 50=0.14% 00:32:58.430 cpu : usr=92.28%, sys=6.15%, ctx=13, majf=0, minf=1637 00:32:58.430 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:58.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.430 issued rwts: total=2122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.430 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:58.430 filename0: (groupid=0, jobs=1): err= 0: pid=93464: Thu Apr 18 11:24:05 2024 00:32:58.430 read: IOPS=169, BW=21.1MiB/s (22.2MB/s)(212MiB/10008msec) 00:32:58.430 slat (nsec): min=6060, max=82592, avg=20933.74, stdev=5898.22 00:32:58.430 clat (usec): min=8362, max=22743, avg=17704.35, stdev=1435.16 00:32:58.430 lat (usec): min=8383, max=22768, avg=17725.29, stdev=1435.03 00:32:58.430 clat percentiles (usec): 00:32:58.430 | 1.00th=[14353], 5.00th=[15270], 10.00th=[15795], 20.00th=[16450], 00:32:58.430 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17695], 60.00th=[18220], 00:32:58.430 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[19792], 00:32:58.430 | 99.00th=[20317], 99.50th=[20579], 99.90th=[22676], 
99.95th=[22676], 00:32:58.430 | 99.99th=[22676] 00:32:58.430 bw ( KiB/s): min=20224, max=22784, per=31.71%, avg=21636.53, stdev=632.62, samples=19 00:32:58.430 iops : min= 158, max= 178, avg=169.00, stdev= 5.00, samples=19 00:32:58.430 lat (msec) : 10=0.06%, 20=96.16%, 50=3.78% 00:32:58.430 cpu : usr=92.73%, sys=5.76%, ctx=50, majf=0, minf=1637 00:32:58.430 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:58.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.430 issued rwts: total=1693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.430 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:58.430 filename0: (groupid=0, jobs=1): err= 0: pid=93465: Thu Apr 18 11:24:05 2024 00:32:58.430 read: IOPS=151, BW=19.0MiB/s (19.9MB/s)(190MiB/10007msec) 00:32:58.430 slat (nsec): min=6147, max=51653, avg=14970.28, stdev=6495.39 00:32:58.430 clat (usec): min=17508, max=22538, avg=19698.82, stdev=780.87 00:32:58.430 lat (usec): min=17518, max=22560, avg=19713.79, stdev=780.88 00:32:58.430 clat percentiles (usec): 00:32:58.430 | 1.00th=[18220], 5.00th=[18482], 10.00th=[18744], 20.00th=[19006], 00:32:58.430 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:32:58.430 | 70.00th=[20055], 80.00th=[20317], 90.00th=[20841], 95.00th=[21103], 00:32:58.430 | 99.00th=[21627], 99.50th=[21890], 99.90th=[22414], 99.95th=[22414], 00:32:58.430 | 99.99th=[22414] 00:32:58.430 bw ( KiB/s): min=19200, max=19968, per=28.49%, avg=19442.53, stdev=366.77, samples=19 00:32:58.430 iops : min= 150, max= 156, avg=151.89, stdev= 2.87, samples=19 00:32:58.430 lat (msec) : 20=65.55%, 50=34.45% 00:32:58.430 cpu : usr=93.23%, sys=5.42%, ctx=13, majf=0, minf=1635 00:32:58.430 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:58.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.430 issued rwts: total=1521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.430 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:58.430 00:32:58.430 Run status group 0 (all jobs): 00:32:58.430 READ: bw=66.6MiB/s (69.9MB/s), 19.0MiB/s-26.5MiB/s (19.9MB/s-27.8MB/s), io=667MiB (699MB), run=10007-10010msec 00:32:58.430 ----------------------------------------------------- 00:32:58.430 Suppressions used: 00:32:58.430 count bytes template 00:32:58.430 5 44 /usr/src/fio/parse.c 00:32:58.430 1 8 libtcmalloc_minimal.so 00:32:58.430 1 904 libcrypto.so 00:32:58.430 ----------------------------------------------------- 00:32:58.430 00:32:58.430 11:24:06 -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:58.430 11:24:06 -- target/dif.sh@43 -- # local sub 00:32:58.430 11:24:06 -- target/dif.sh@45 -- # for sub in "$@" 00:32:58.430 11:24:06 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:58.430 11:24:06 -- target/dif.sh@36 -- # local sub_id=0 00:32:58.430 11:24:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:58.430 11:24:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:58.430 11:24:06 -- common/autotest_common.sh@10 -- # set +x 00:32:58.430 11:24:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:58.430 11:24:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:58.430 11:24:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:58.430 11:24:06 -- 
common/autotest_common.sh@10 -- # set +x 00:32:58.430 ************************************ 00:32:58.430 END TEST fio_dif_digest 00:32:58.430 ************************************ 00:32:58.430 11:24:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:58.430 00:32:58.430 real 0m12.528s 00:32:58.430 user 0m29.876s 00:32:58.430 sys 0m2.147s 00:32:58.430 11:24:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:58.430 11:24:06 -- common/autotest_common.sh@10 -- # set +x 00:32:58.430 11:24:06 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:58.430 11:24:06 -- target/dif.sh@147 -- # nvmftestfini 00:32:58.430 11:24:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:58.430 11:24:06 -- nvmf/common.sh@117 -- # sync 00:32:58.430 11:24:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:58.430 11:24:06 -- nvmf/common.sh@120 -- # set +e 00:32:58.430 11:24:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:58.430 11:24:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:58.430 rmmod nvme_tcp 00:32:58.430 rmmod nvme_fabrics 00:32:58.430 rmmod nvme_keyring 00:32:58.688 11:24:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:58.688 11:24:06 -- nvmf/common.sh@124 -- # set -e 00:32:58.688 11:24:06 -- nvmf/common.sh@125 -- # return 0 00:32:58.688 11:24:06 -- nvmf/common.sh@478 -- # '[' -n 92654 ']' 00:32:58.688 11:24:06 -- nvmf/common.sh@479 -- # killprocess 92654 00:32:58.688 11:24:06 -- common/autotest_common.sh@936 -- # '[' -z 92654 ']' 00:32:58.688 11:24:06 -- common/autotest_common.sh@940 -- # kill -0 92654 00:32:58.688 11:24:06 -- common/autotest_common.sh@941 -- # uname 00:32:58.688 11:24:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:58.688 11:24:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92654 00:32:58.688 killing process with pid 92654 00:32:58.688 11:24:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:58.688 11:24:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:58.688 11:24:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92654' 00:32:58.688 11:24:06 -- common/autotest_common.sh@955 -- # kill 92654 00:32:58.688 11:24:06 -- common/autotest_common.sh@960 -- # wait 92654 00:33:00.064 11:24:07 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:33:00.064 11:24:07 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:00.064 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:00.064 Waiting for block devices as requested 00:33:00.323 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:00.323 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:00.323 11:24:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:00.323 11:24:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:00.323 11:24:08 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:00.323 11:24:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:00.323 11:24:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.323 11:24:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:00.323 11:24:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.323 11:24:08 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:00.323 00:33:00.323 real 1m10.142s 00:33:00.323 user 4m11.690s 00:33:00.323 sys 0m15.236s 00:33:00.323 11:24:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:00.323 
11:24:08 -- common/autotest_common.sh@10 -- # set +x 00:33:00.323 ************************************ 00:33:00.323 END TEST nvmf_dif 00:33:00.323 ************************************ 00:33:00.323 11:24:08 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:00.323 11:24:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:00.323 11:24:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:00.323 11:24:08 -- common/autotest_common.sh@10 -- # set +x 00:33:00.583 ************************************ 00:33:00.583 START TEST nvmf_abort_qd_sizes 00:33:00.583 ************************************ 00:33:00.583 11:24:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:00.583 * Looking for test storage... 00:33:00.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:00.583 11:24:08 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:00.583 11:24:08 -- nvmf/common.sh@7 -- # uname -s 00:33:00.583 11:24:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:00.583 11:24:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.583 11:24:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.583 11:24:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:00.583 11:24:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.583 11:24:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.583 11:24:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.583 11:24:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.583 11:24:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.583 11:24:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.583 11:24:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:33:00.583 11:24:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:33:00.583 11:24:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.583 11:24:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.583 11:24:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:00.583 11:24:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:00.583 11:24:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:00.583 11:24:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.583 11:24:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.583 11:24:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.583 11:24:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.583 11:24:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.583 11:24:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.583 11:24:08 -- paths/export.sh@5 -- # export PATH 00:33:00.583 11:24:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.583 11:24:08 -- nvmf/common.sh@47 -- # : 0 00:33:00.583 11:24:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:00.583 11:24:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:00.583 11:24:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.583 11:24:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.583 11:24:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.583 11:24:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:00.583 11:24:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:00.583 11:24:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:00.583 11:24:08 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:00.583 11:24:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:33:00.583 11:24:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.583 11:24:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:33:00.583 11:24:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:33:00.583 11:24:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:33:00.583 11:24:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.583 11:24:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:00.583 11:24:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.583 11:24:08 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:33:00.583 11:24:08 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:33:00.583 11:24:08 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:33:00.583 11:24:08 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:33:00.583 11:24:08 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:33:00.583 11:24:08 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:33:00.583 11:24:08 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.583 11:24:08 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.583 11:24:08 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:00.583 11:24:08 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:00.583 11:24:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:00.583 11:24:08 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
00:33:00.583 11:24:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:00.583 11:24:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:00.583 11:24:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:00.583 11:24:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:00.583 11:24:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:00.583 11:24:08 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:00.583 11:24:08 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:00.583 11:24:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:00.583 Cannot find device "nvmf_tgt_br" 00:33:00.583 11:24:08 -- nvmf/common.sh@155 -- # true 00:33:00.583 11:24:08 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:00.583 Cannot find device "nvmf_tgt_br2" 00:33:00.583 11:24:08 -- nvmf/common.sh@156 -- # true 00:33:00.583 11:24:08 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:00.583 11:24:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:00.583 Cannot find device "nvmf_tgt_br" 00:33:00.583 11:24:08 -- nvmf/common.sh@158 -- # true 00:33:00.583 11:24:08 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:00.843 Cannot find device "nvmf_tgt_br2" 00:33:00.843 11:24:08 -- nvmf/common.sh@159 -- # true 00:33:00.843 11:24:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:00.843 11:24:08 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:00.843 11:24:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:00.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:00.843 11:24:08 -- nvmf/common.sh@162 -- # true 00:33:00.843 11:24:08 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:00.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:00.843 11:24:08 -- nvmf/common.sh@163 -- # true 00:33:00.843 11:24:08 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:00.843 11:24:08 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:00.843 11:24:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:00.843 11:24:08 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:00.843 11:24:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:00.843 11:24:08 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:00.843 11:24:08 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:00.843 11:24:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:00.843 11:24:08 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:00.843 11:24:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:00.843 11:24:08 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:00.843 11:24:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:00.843 11:24:08 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:00.843 11:24:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:00.843 11:24:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:00.843 11:24:08 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:00.843 11:24:09 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:00.843 11:24:09 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:00.843 11:24:09 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:00.843 11:24:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:00.843 11:24:09 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:00.843 11:24:09 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:00.843 11:24:09 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:01.107 11:24:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:01.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:01.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:33:01.107 00:33:01.107 --- 10.0.0.2 ping statistics --- 00:33:01.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.107 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:33:01.107 11:24:09 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:01.107 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:01.107 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:33:01.107 00:33:01.107 --- 10.0.0.3 ping statistics --- 00:33:01.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.107 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:33:01.107 11:24:09 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:01.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:01.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:33:01.107 00:33:01.107 --- 10.0.0.1 ping statistics --- 00:33:01.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.107 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:33:01.107 11:24:09 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:01.107 11:24:09 -- nvmf/common.sh@422 -- # return 0 00:33:01.107 11:24:09 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:33:01.107 11:24:09 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:01.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:01.673 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:33:01.673 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:33:01.931 11:24:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:01.931 11:24:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:01.931 11:24:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:01.931 11:24:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:01.931 11:24:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:01.931 11:24:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:01.931 11:24:09 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:01.931 11:24:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:33:01.931 11:24:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:01.931 11:24:09 -- common/autotest_common.sh@10 -- # set +x 00:33:01.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
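Condensed, the nvmf_veth_init sequence traced above builds a small virtual test bed: one initiator interface on the host, a target network namespace holding two interfaces, and a bridge joining the host-side veth peers. The sketch below restates that layout with the same names and addresses as the trace; it is an illustration only, with the harness's teardown of stale interfaces and its error handling omitted.

# Sketch of the veth/namespace/bridge layout used by nvmf_veth_init (condensed).
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two for the target namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Address plan: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic (port 4420) in and allow bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability checks, as in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1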
00:33:01.931 11:24:09 -- nvmf/common.sh@470 -- # nvmfpid=94084 00:33:01.931 11:24:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:01.931 11:24:09 -- nvmf/common.sh@471 -- # waitforlisten 94084 00:33:01.931 11:24:09 -- common/autotest_common.sh@817 -- # '[' -z 94084 ']' 00:33:01.931 11:24:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.931 11:24:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:01.931 11:24:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.931 11:24:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:01.931 11:24:09 -- common/autotest_common.sh@10 -- # set +x 00:33:01.931 [2024-04-18 11:24:10.094145] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:33:01.931 [2024-04-18 11:24:10.095234] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:02.190 [2024-04-18 11:24:10.276200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:02.448 [2024-04-18 11:24:10.576349] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.448 [2024-04-18 11:24:10.576705] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.448 [2024-04-18 11:24:10.576939] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.448 [2024-04-18 11:24:10.577229] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:02.448 [2024-04-18 11:24:10.577270] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
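The nvmfappstart step above amounts to launching the userspace target inside that namespace and polling its JSON-RPC socket until it answers. A rough equivalent is sketched here; the bounded retry loop is illustrative, since the harness's waitforlisten helper also tracks the PID and applies its own timeout.

# Launch nvmf_tgt in the target namespace with the flags from the trace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!

# Poll the RPC socket; rpc_get_methods succeeds once the app is listening.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done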
00:33:02.448 [2024-04-18 11:24:10.577479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.448 [2024-04-18 11:24:10.577878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.448 [2024-04-18 11:24:10.578021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:02.448 [2024-04-18 11:24:10.577894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:03.015 11:24:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:03.015 11:24:11 -- common/autotest_common.sh@850 -- # return 0 00:33:03.015 11:24:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:33:03.015 11:24:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:03.015 11:24:11 -- common/autotest_common.sh@10 -- # set +x 00:33:03.015 11:24:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:03.015 11:24:11 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:03.015 11:24:11 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:03.015 11:24:11 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:03.015 11:24:11 -- scripts/common.sh@309 -- # local bdf bdfs 00:33:03.015 11:24:11 -- scripts/common.sh@310 -- # local nvmes 00:33:03.015 11:24:11 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:33:03.015 11:24:11 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:33:03.015 11:24:11 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:33:03.015 11:24:11 -- scripts/common.sh@295 -- # local bdf= 00:33:03.015 11:24:11 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:33:03.015 11:24:11 -- scripts/common.sh@230 -- # local class 00:33:03.015 11:24:11 -- scripts/common.sh@231 -- # local subclass 00:33:03.015 11:24:11 -- scripts/common.sh@232 -- # local progif 00:33:03.015 11:24:11 -- scripts/common.sh@233 -- # printf %02x 1 00:33:03.015 11:24:11 -- scripts/common.sh@233 -- # class=01 00:33:03.015 11:24:11 -- scripts/common.sh@234 -- # printf %02x 8 00:33:03.015 11:24:11 -- scripts/common.sh@234 -- # subclass=08 00:33:03.015 11:24:11 -- scripts/common.sh@235 -- # printf %02x 2 00:33:03.015 11:24:11 -- scripts/common.sh@235 -- # progif=02 00:33:03.015 11:24:11 -- scripts/common.sh@237 -- # hash lspci 00:33:03.015 11:24:11 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:33:03.015 11:24:11 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:33:03.015 11:24:11 -- scripts/common.sh@240 -- # grep -i -- -p02 00:33:03.015 11:24:11 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:33:03.015 11:24:11 -- scripts/common.sh@242 -- # tr -d '"' 00:33:03.015 11:24:11 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:03.015 11:24:11 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:33:03.015 11:24:11 -- scripts/common.sh@15 -- # local i 00:33:03.015 11:24:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:33:03.015 11:24:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:33:03.015 11:24:11 -- scripts/common.sh@24 -- # return 0 00:33:03.015 11:24:11 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:33:03.015 11:24:11 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:03.015 11:24:11 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:33:03.015 11:24:11 -- scripts/common.sh@15 -- # local i 00:33:03.015 11:24:11 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:33:03.015 11:24:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:33:03.015 11:24:11 -- scripts/common.sh@24 -- # return 0 00:33:03.015 11:24:11 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:33:03.015 11:24:11 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:03.015 11:24:11 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:33:03.015 11:24:11 -- scripts/common.sh@320 -- # uname -s 00:33:03.015 11:24:11 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:03.015 11:24:11 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:03.015 11:24:11 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:03.015 11:24:11 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:33:03.015 11:24:11 -- scripts/common.sh@320 -- # uname -s 00:33:03.015 11:24:11 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:03.015 11:24:11 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:03.015 11:24:11 -- scripts/common.sh@325 -- # (( 2 )) 00:33:03.015 11:24:11 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:33:03.015 11:24:11 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:33:03.015 11:24:11 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:33:03.015 11:24:11 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:03.015 11:24:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:03.015 11:24:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:03.015 11:24:11 -- common/autotest_common.sh@10 -- # set +x 00:33:03.015 ************************************ 00:33:03.015 START TEST spdk_target_abort 00:33:03.015 ************************************ 00:33:03.015 11:24:11 -- common/autotest_common.sh@1111 -- # spdk_target 00:33:03.015 11:24:11 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:03.015 11:24:11 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:33:03.015 11:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:03.015 11:24:11 -- common/autotest_common.sh@10 -- # set +x 00:33:03.274 spdk_targetn1 00:33:03.274 11:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:03.274 11:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:03.274 11:24:11 -- common/autotest_common.sh@10 -- # set +x 00:33:03.274 [2024-04-18 11:24:11.281098] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:03.274 11:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:03.274 11:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:03.274 11:24:11 -- common/autotest_common.sh@10 -- # set +x 00:33:03.274 11:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:03.274 11:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:03.274 11:24:11 -- common/autotest_common.sh@10 -- # set +x 00:33:03.274 11:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:03.274 11:24:11 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:33:03.274 11:24:11 -- common/autotest_common.sh@10 -- # set +x 00:33:03.274 [2024-04-18 11:24:11.317324] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.274 11:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:03.274 11:24:11 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:06.559 Initializing NVMe Controllers 00:33:06.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:06.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:06.559 Initialization complete. Launching workers. 
00:33:06.559 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8318, failed: 0 00:33:06.559 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1068, failed to submit 7250 00:33:06.559 success 700, unsuccess 368, failed 0 00:33:06.559 11:24:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:06.559 11:24:14 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:09.842 [2024-04-18 11:24:17.983174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:33:09.842 [2024-04-18 11:24:17.983256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:33:09.842 [2024-04-18 11:24:17.983273] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:33:09.842 [2024-04-18 11:24:17.983287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:33:09.842 [2024-04-18 11:24:17.983300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:33:09.842 [2024-04-18 11:24:17.983313] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:33:10.100 Initializing NVMe Controllers 00:33:10.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:10.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:10.100 Initialization complete. Launching workers. 00:33:10.100 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6030, failed: 0 00:33:10.100 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1300, failed to submit 4730 00:33:10.100 success 228, unsuccess 1072, failed 0 00:33:10.101 11:24:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:10.101 11:24:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:13.386 Initializing NVMe Controllers 00:33:13.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:13.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:13.386 Initialization complete. Launching workers. 
00:33:13.386 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26715, failed: 0 00:33:13.386 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2546, failed to submit 24169 00:33:13.386 success 161, unsuccess 2385, failed 0 00:33:13.386 11:24:21 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:13.386 11:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:13.386 11:24:21 -- common/autotest_common.sh@10 -- # set +x 00:33:13.386 11:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:13.386 11:24:21 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:13.386 11:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:13.386 11:24:21 -- common/autotest_common.sh@10 -- # set +x 00:33:14.325 11:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:14.325 11:24:22 -- target/abort_qd_sizes.sh@61 -- # killprocess 94084 00:33:14.325 11:24:22 -- common/autotest_common.sh@936 -- # '[' -z 94084 ']' 00:33:14.325 11:24:22 -- common/autotest_common.sh@940 -- # kill -0 94084 00:33:14.325 11:24:22 -- common/autotest_common.sh@941 -- # uname 00:33:14.325 11:24:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:14.325 11:24:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94084 00:33:14.325 killing process with pid 94084 00:33:14.325 11:24:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:14.325 11:24:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:14.325 11:24:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94084' 00:33:14.325 11:24:22 -- common/autotest_common.sh@955 -- # kill 94084 00:33:14.325 11:24:22 -- common/autotest_common.sh@960 -- # wait 94084 00:33:15.259 ************************************ 00:33:15.259 END TEST spdk_target_abort 00:33:15.259 ************************************ 00:33:15.259 00:33:15.259 real 0m12.143s 00:33:15.259 user 0m48.051s 00:33:15.259 sys 0m1.894s 00:33:15.259 11:24:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:15.259 11:24:23 -- common/autotest_common.sh@10 -- # set +x 00:33:15.259 11:24:23 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:15.259 11:24:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:15.259 11:24:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:15.259 11:24:23 -- common/autotest_common.sh@10 -- # set +x 00:33:15.259 ************************************ 00:33:15.259 START TEST kernel_target_abort 00:33:15.259 ************************************ 00:33:15.259 11:24:23 -- common/autotest_common.sh@1111 -- # kernel_target 00:33:15.259 11:24:23 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:15.259 11:24:23 -- nvmf/common.sh@717 -- # local ip 00:33:15.259 11:24:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:33:15.259 11:24:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:33:15.259 11:24:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.260 11:24:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.260 11:24:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:33:15.260 11:24:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.260 11:24:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:33:15.260 11:24:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:33:15.260 11:24:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
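Before the kernel-target variant that starts below, it helps to see that the spdk_target_abort test just completed reduces to a short RPC sequence against the running target followed by the abort example at increasing queue depths. The rpc_cmd calls in the trace talk to /var/tmp/spdk.sock, so an equivalent standalone transcript looks roughly like this (addresses, NQNs and flags copied from the trace):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

# Expose the local PCIe NVMe drive through an NVMe-oF/TCP subsystem.
rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# Drive mixed I/O and submit aborts at queue depths 4, 24 and 64.
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done

# Teardown mirrors the trace: drop the subsystem, then detach the controller.
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
rpc bdev_nvme_detach_controller spdk_target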
00:33:15.260 11:24:23 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:15.260 11:24:23 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:15.260 11:24:23 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:33:15.260 11:24:23 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:15.260 11:24:23 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:15.260 11:24:23 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:15.260 11:24:23 -- nvmf/common.sh@628 -- # local block nvme 00:33:15.260 11:24:23 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:33:15.260 11:24:23 -- nvmf/common.sh@631 -- # modprobe nvmet 00:33:15.518 11:24:23 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:15.518 11:24:23 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:15.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:15.777 Waiting for block devices as requested 00:33:15.777 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:16.035 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:16.604 11:24:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:33:16.604 11:24:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:16.604 11:24:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:33:16.604 11:24:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:16.604 11:24:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:16.604 11:24:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:16.604 11:24:24 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:33:16.604 11:24:24 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:16.604 11:24:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:33:16.604 No valid GPT data, bailing 00:33:16.604 11:24:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:16.604 11:24:24 -- scripts/common.sh@391 -- # pt= 00:33:16.604 11:24:24 -- scripts/common.sh@392 -- # return 1 00:33:16.604 11:24:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:33:16.604 11:24:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:33:16.604 11:24:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:33:16.604 11:24:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:33:16.604 11:24:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:33:16.604 11:24:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:33:16.604 11:24:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:16.604 11:24:24 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:33:16.604 11:24:24 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:33:16.604 11:24:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:33:16.863 No valid GPT data, bailing 00:33:16.863 11:24:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:33:16.863 11:24:24 -- scripts/common.sh@391 -- # pt= 00:33:16.863 11:24:24 -- scripts/common.sh@392 -- # return 1 00:33:16.863 11:24:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:33:16.863 11:24:24 -- nvmf/common.sh@639 -- # for 
block in /sys/block/nvme* 00:33:16.863 11:24:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:33:16.863 11:24:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:33:16.863 11:24:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:33:16.863 11:24:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:33:16.863 11:24:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:16.863 11:24:24 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:33:16.863 11:24:24 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:33:16.863 11:24:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:33:16.863 No valid GPT data, bailing 00:33:16.863 11:24:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:33:16.863 11:24:24 -- scripts/common.sh@391 -- # pt= 00:33:16.863 11:24:24 -- scripts/common.sh@392 -- # return 1 00:33:16.863 11:24:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:33:16.863 11:24:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:33:16.863 11:24:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:33:16.863 11:24:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:33:16.863 11:24:24 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:33:16.863 11:24:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:33:16.863 11:24:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:16.863 11:24:24 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:33:16.863 11:24:24 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:33:16.863 11:24:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:33:16.863 No valid GPT data, bailing 00:33:16.863 11:24:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:33:16.863 11:24:25 -- scripts/common.sh@391 -- # pt= 00:33:16.863 11:24:25 -- scripts/common.sh@392 -- # return 1 00:33:16.863 11:24:25 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:33:16.863 11:24:25 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:33:16.863 11:24:25 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:16.863 11:24:25 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:16.863 11:24:25 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:16.863 11:24:25 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:16.863 11:24:25 -- nvmf/common.sh@656 -- # echo 1 00:33:16.863 11:24:25 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:33:16.863 11:24:25 -- nvmf/common.sh@658 -- # echo 1 00:33:16.863 11:24:25 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:33:16.863 11:24:25 -- nvmf/common.sh@661 -- # echo tcp 00:33:16.863 11:24:25 -- nvmf/common.sh@662 -- # echo 4420 00:33:16.863 11:24:25 -- nvmf/common.sh@663 -- # echo ipv4 00:33:16.863 11:24:25 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:16.863 11:24:25 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 --hostid=27b29bba-0d0f-41a1-b963-3a8b51e36967 -a 10.0.0.1 -t tcp -s 4420 00:33:17.122 00:33:17.122 Discovery Log Number of Records 2, Generation counter 2 00:33:17.122 =====Discovery Log Entry 0====== 00:33:17.122 trtype: tcp 00:33:17.122 adrfam: ipv4 00:33:17.122 
subtype: current discovery subsystem 00:33:17.122 treq: not specified, sq flow control disable supported 00:33:17.122 portid: 1 00:33:17.122 trsvcid: 4420 00:33:17.122 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:17.122 traddr: 10.0.0.1 00:33:17.122 eflags: none 00:33:17.122 sectype: none 00:33:17.122 =====Discovery Log Entry 1====== 00:33:17.122 trtype: tcp 00:33:17.122 adrfam: ipv4 00:33:17.122 subtype: nvme subsystem 00:33:17.122 treq: not specified, sq flow control disable supported 00:33:17.122 portid: 1 00:33:17.122 trsvcid: 4420 00:33:17.122 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:17.122 traddr: 10.0.0.1 00:33:17.122 eflags: none 00:33:17.122 sectype: none 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:17.122 11:24:25 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:20.409 Initializing NVMe Controllers 00:33:20.409 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:20.409 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:20.409 Initialization complete. Launching workers. 
00:33:20.409 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 23232, failed: 0 00:33:20.409 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23232, failed to submit 0 00:33:20.409 success 0, unsuccess 23232, failed 0 00:33:20.409 11:24:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:20.409 11:24:28 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:23.748 Initializing NVMe Controllers 00:33:23.748 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:23.748 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:23.748 Initialization complete. Launching workers. 00:33:23.748 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53858, failed: 0 00:33:23.748 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23255, failed to submit 30603 00:33:23.748 success 0, unsuccess 23255, failed 0 00:33:23.748 11:24:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:23.748 11:24:31 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:27.077 Initializing NVMe Controllers 00:33:27.077 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:27.077 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:27.077 Initialization complete. Launching workers. 00:33:27.077 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62334, failed: 0 00:33:27.077 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15620, failed to submit 46714 00:33:27.077 success 0, unsuccess 15620, failed 0 00:33:27.077 11:24:34 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:27.077 11:24:34 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:27.077 11:24:34 -- nvmf/common.sh@675 -- # echo 0 00:33:27.077 11:24:34 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:27.077 11:24:34 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:27.077 11:24:34 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:27.077 11:24:34 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:27.077 11:24:34 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:33:27.077 11:24:34 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:33:27.077 11:24:34 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:27.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:29.049 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:33:29.049 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:33:29.049 00:33:29.049 real 0m13.642s 00:33:29.049 user 0m6.901s 00:33:29.049 sys 0m4.449s 00:33:29.049 11:24:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:29.049 11:24:37 -- common/autotest_common.sh@10 -- # set +x 00:33:29.049 
************************************ 00:33:29.049 END TEST kernel_target_abort 00:33:29.049 ************************************ 00:33:29.049 11:24:37 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:29.049 11:24:37 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:29.049 11:24:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:29.049 11:24:37 -- nvmf/common.sh@117 -- # sync 00:33:29.049 11:24:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:29.049 11:24:37 -- nvmf/common.sh@120 -- # set +e 00:33:29.049 11:24:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:29.049 11:24:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:29.049 rmmod nvme_tcp 00:33:29.049 rmmod nvme_fabrics 00:33:29.049 rmmod nvme_keyring 00:33:29.049 11:24:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:29.049 11:24:37 -- nvmf/common.sh@124 -- # set -e 00:33:29.049 11:24:37 -- nvmf/common.sh@125 -- # return 0 00:33:29.049 11:24:37 -- nvmf/common.sh@478 -- # '[' -n 94084 ']' 00:33:29.049 11:24:37 -- nvmf/common.sh@479 -- # killprocess 94084 00:33:29.049 11:24:37 -- common/autotest_common.sh@936 -- # '[' -z 94084 ']' 00:33:29.049 11:24:37 -- common/autotest_common.sh@940 -- # kill -0 94084 00:33:29.049 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (94084) - No such process 00:33:29.049 Process with pid 94084 is not found 00:33:29.049 11:24:37 -- common/autotest_common.sh@963 -- # echo 'Process with pid 94084 is not found' 00:33:29.049 11:24:37 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:33:29.049 11:24:37 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:29.307 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:29.566 Waiting for block devices as requested 00:33:29.566 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:29.566 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:29.566 11:24:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:29.566 11:24:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:29.566 11:24:37 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:29.566 11:24:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:29.566 11:24:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.566 11:24:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:29.566 11:24:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.566 11:24:37 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:29.566 00:33:29.566 real 0m29.141s 00:33:29.566 user 0m56.177s 00:33:29.566 sys 0m7.716s 00:33:29.566 11:24:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:29.566 11:24:37 -- common/autotest_common.sh@10 -- # set +x 00:33:29.566 ************************************ 00:33:29.566 END TEST nvmf_abort_qd_sizes 00:33:29.566 ************************************ 00:33:29.825 11:24:37 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:33:29.825 11:24:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:29.825 11:24:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:29.825 11:24:37 -- common/autotest_common.sh@10 -- # set +x 00:33:29.825 ************************************ 00:33:29.825 START TEST keyring_file 00:33:29.825 ************************************ 00:33:29.825 11:24:37 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:33:29.825 * Looking for test storage... 00:33:29.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:33:29.825 11:24:37 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:33:29.825 11:24:37 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:29.825 11:24:37 -- nvmf/common.sh@7 -- # uname -s 00:33:29.825 11:24:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.825 11:24:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.825 11:24:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.825 11:24:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.825 11:24:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.825 11:24:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.825 11:24:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.825 11:24:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.825 11:24:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.825 11:24:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.825 11:24:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:27b29bba-0d0f-41a1-b963-3a8b51e36967 00:33:29.825 11:24:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=27b29bba-0d0f-41a1-b963-3a8b51e36967 00:33:29.825 11:24:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.825 11:24:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.825 11:24:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:29.825 11:24:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.825 11:24:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:29.825 11:24:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.825 11:24:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.825 11:24:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.825 11:24:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.825 11:24:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.825 11:24:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.825 11:24:37 -- paths/export.sh@5 -- # export PATH 00:33:29.825 11:24:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.825 11:24:37 -- nvmf/common.sh@47 -- # : 0 00:33:29.825 11:24:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:29.825 11:24:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:29.825 11:24:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.825 11:24:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.825 11:24:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.825 11:24:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:29.825 11:24:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:29.825 11:24:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:29.825 11:24:37 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:29.825 11:24:37 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:29.825 11:24:37 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:29.825 11:24:37 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:29.825 11:24:37 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:29.825 11:24:37 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:29.825 11:24:37 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:29.825 11:24:37 -- keyring/common.sh@15 -- # local name key digest path 00:33:29.825 11:24:37 -- keyring/common.sh@17 -- # name=key0 00:33:29.825 11:24:38 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:29.825 11:24:38 -- keyring/common.sh@17 -- # digest=0 00:33:29.825 11:24:38 -- keyring/common.sh@18 -- # mktemp 00:33:29.825 11:24:38 -- keyring/common.sh@18 -- # path=/tmp/tmp.Wc4il6bJHf 00:33:29.825 11:24:38 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:29.825 11:24:38 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:29.825 11:24:38 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:29.825 11:24:38 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:29.825 11:24:38 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:29.825 11:24:38 -- nvmf/common.sh@693 -- # digest=0 00:33:29.825 11:24:38 -- nvmf/common.sh@694 -- # python - 00:33:30.084 11:24:38 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Wc4il6bJHf 00:33:30.084 11:24:38 -- keyring/common.sh@23 -- # echo /tmp/tmp.Wc4il6bJHf 00:33:30.084 11:24:38 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Wc4il6bJHf 00:33:30.084 11:24:38 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:30.084 11:24:38 -- keyring/common.sh@15 -- # local name key digest path 00:33:30.084 11:24:38 -- keyring/common.sh@17 -- # name=key1 00:33:30.084 11:24:38 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:30.084 11:24:38 -- keyring/common.sh@17 -- # digest=0 00:33:30.084 11:24:38 -- keyring/common.sh@18 -- # mktemp 00:33:30.084 11:24:38 -- keyring/common.sh@18 -- # path=/tmp/tmp.5r58yW7ayp 00:33:30.084 11:24:38 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:30.084 11:24:38 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:33:30.085 11:24:38 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:30.085 11:24:38 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:30.085 11:24:38 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:33:30.085 11:24:38 -- nvmf/common.sh@693 -- # digest=0 00:33:30.085 11:24:38 -- nvmf/common.sh@694 -- # python - 00:33:30.085 11:24:38 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5r58yW7ayp 00:33:30.085 11:24:38 -- keyring/common.sh@23 -- # echo /tmp/tmp.5r58yW7ayp 00:33:30.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.085 11:24:38 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5r58yW7ayp 00:33:30.085 11:24:38 -- keyring/file.sh@30 -- # tgtpid=95211 00:33:30.085 11:24:38 -- keyring/file.sh@32 -- # waitforlisten 95211 00:33:30.085 11:24:38 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:30.085 11:24:38 -- common/autotest_common.sh@817 -- # '[' -z 95211 ']' 00:33:30.085 11:24:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.085 11:24:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:30.085 11:24:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.085 11:24:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:30.085 11:24:38 -- common/autotest_common.sh@10 -- # set +x 00:33:30.085 [2024-04-18 11:24:38.271750] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:33:30.085 [2024-04-18 11:24:38.272174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95211 ] 00:33:30.345 [2024-04-18 11:24:38.451401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.604 [2024-04-18 11:24:38.740069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.539 11:24:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:31.539 11:24:39 -- common/autotest_common.sh@850 -- # return 0 00:33:31.539 11:24:39 -- keyring/file.sh@33 -- # rpc_cmd 00:33:31.539 11:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:31.539 11:24:39 -- common/autotest_common.sh@10 -- # set +x 00:33:31.539 [2024-04-18 11:24:39.543339] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.539 null0 00:33:31.539 [2024-04-18 11:24:39.575244] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:31.539 [2024-04-18 11:24:39.575664] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:31.539 [2024-04-18 11:24:39.583303] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:31.539 11:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:31.539 11:24:39 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:31.539 11:24:39 -- common/autotest_common.sh@638 -- # local es=0 00:33:31.539 11:24:39 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:31.539 11:24:39 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:33:31.539 11:24:39 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:31.539 11:24:39 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:33:31.539 11:24:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:31.539 11:24:39 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:31.539 11:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:31.539 11:24:39 -- common/autotest_common.sh@10 -- # set +x 00:33:31.539 [2024-04-18 11:24:39.595288] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:33:31.539 2024/04/18 11:24:39 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:33:31.539 { 00:33:31.539 "method": "nvmf_subsystem_add_listener", 00:33:31.539 "params": { 00:33:31.539 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:31.539 "secure_channel": false, 00:33:31.539 "listen_address": { 00:33:31.539 "trtype": "tcp", 00:33:31.539 "traddr": "127.0.0.1", 00:33:31.539 "trsvcid": "4420" 00:33:31.539 } 00:33:31.539 } 00:33:31.539 } 00:33:31.539 Got JSON-RPC error response 00:33:31.539 GoRPCClient: error on JSON-RPC call 00:33:31.539 11:24:39 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:33:31.539 11:24:39 -- common/autotest_common.sh@641 -- # es=1 00:33:31.539 11:24:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:31.539 11:24:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:31.539 11:24:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:31.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:31.539 11:24:39 -- keyring/file.sh@46 -- # bperfpid=95246 00:33:31.539 11:24:39 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:31.539 11:24:39 -- keyring/file.sh@48 -- # waitforlisten 95246 /var/tmp/bperf.sock 00:33:31.539 11:24:39 -- common/autotest_common.sh@817 -- # '[' -z 95246 ']' 00:33:31.539 11:24:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:31.539 11:24:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:31.539 11:24:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:31.539 11:24:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:31.539 11:24:39 -- common/autotest_common.sh@10 -- # set +x 00:33:31.539 [2024-04-18 11:24:39.694038] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
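For anyone replaying this step outside the harness, the rejected nvmf_subsystem_add_listener request above corresponds to the plain rpc.py call sketched below (rpc_cmd in the trace wraps this invocation, and /var/tmp/spdk.sock is the target's RPC socket used throughout this run). It is expected to fail with code -32602, because the target already holds a 127.0.0.1:4420 listener registered with a different secure_channel setting.

    # Expected to be rejected: the existing listener on 127.0.0.1:4420 was added
    # with a different secure_channel option than this request asks for.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0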
00:33:31.539 [2024-04-18 11:24:39.694357] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95246 ] 00:33:31.803 [2024-04-18 11:24:39.856981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.067 [2024-04-18 11:24:40.092195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.634 11:24:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:32.634 11:24:40 -- common/autotest_common.sh@850 -- # return 0 00:33:32.634 11:24:40 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Wc4il6bJHf 00:33:32.634 11:24:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Wc4il6bJHf 00:33:32.893 11:24:40 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5r58yW7ayp 00:33:32.893 11:24:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5r58yW7ayp 00:33:33.151 11:24:41 -- keyring/file.sh@51 -- # get_key key0 00:33:33.151 11:24:41 -- keyring/file.sh@51 -- # jq -r .path 00:33:33.151 11:24:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.151 11:24:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.151 11:24:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.410 11:24:41 -- keyring/file.sh@51 -- # [[ /tmp/tmp.Wc4il6bJHf == \/\t\m\p\/\t\m\p\.\W\c\4\i\l\6\b\J\H\f ]] 00:33:33.410 11:24:41 -- keyring/file.sh@52 -- # jq -r .path 00:33:33.410 11:24:41 -- keyring/file.sh@52 -- # get_key key1 00:33:33.410 11:24:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.410 11:24:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.410 11:24:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:33.668 11:24:41 -- keyring/file.sh@52 -- # [[ /tmp/tmp.5r58yW7ayp == \/\t\m\p\/\t\m\p\.\5\r\5\8\y\W\7\a\y\p ]] 00:33:33.668 11:24:41 -- keyring/file.sh@53 -- # get_refcnt key0 00:33:33.668 11:24:41 -- keyring/common.sh@12 -- # get_key key0 00:33:33.668 11:24:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.668 11:24:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.668 11:24:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.668 11:24:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.926 11:24:42 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:33.926 11:24:42 -- keyring/file.sh@54 -- # get_refcnt key1 00:33:33.926 11:24:42 -- keyring/common.sh@12 -- # get_key key1 00:33:33.926 11:24:42 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.926 11:24:42 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.926 11:24:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.926 11:24:42 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:34.184 11:24:42 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:34.185 11:24:42 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:33:34.185 11:24:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.443 [2024-04-18 11:24:42.598038] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:34.702 nvme0n1 00:33:34.702 11:24:42 -- keyring/file.sh@59 -- # get_refcnt key0 00:33:34.702 11:24:42 -- keyring/common.sh@12 -- # get_key key0 00:33:34.702 11:24:42 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:34.702 11:24:42 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:34.702 11:24:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.702 11:24:42 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:34.961 11:24:42 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:34.961 11:24:42 -- keyring/file.sh@60 -- # get_refcnt key1 00:33:34.961 11:24:42 -- keyring/common.sh@12 -- # get_key key1 00:33:34.961 11:24:42 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:34.961 11:24:42 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:34.961 11:24:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.961 11:24:42 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:35.219 11:24:43 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:35.219 11:24:43 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:35.219 Running I/O for 1 seconds... 00:33:36.604 00:33:36.604 Latency(us) 00:33:36.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.604 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:36.604 nvme0n1 : 1.01 7970.03 31.13 0.00 0.00 15977.79 6136.55 23950.43 00:33:36.604 =================================================================================================================== 00:33:36.604 Total : 7970.03 31.13 0.00 0.00 15977.79 6136.55 23950.43 00:33:36.604 0 00:33:36.604 11:24:44 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:36.604 11:24:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:36.604 11:24:44 -- keyring/file.sh@65 -- # get_refcnt key0 00:33:36.604 11:24:44 -- keyring/common.sh@12 -- # get_key key0 00:33:36.604 11:24:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.604 11:24:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:36.604 11:24:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.604 11:24:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.862 11:24:44 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:36.862 11:24:44 -- keyring/file.sh@66 -- # get_refcnt key1 00:33:36.862 11:24:44 -- keyring/common.sh@12 -- # get_key key1 00:33:36.862 11:24:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.862 11:24:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.862 11:24:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:36.862 11:24:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.120 
11:24:45 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:37.120 11:24:45 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:37.120 11:24:45 -- common/autotest_common.sh@638 -- # local es=0 00:33:37.120 11:24:45 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:37.120 11:24:45 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:37.120 11:24:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:37.120 11:24:45 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:37.120 11:24:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:37.120 11:24:45 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:37.120 11:24:45 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:37.378 [2024-04-18 11:24:45.444831] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:37.378 [2024-04-18 11:24:45.444915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (107): Transport endpoint is not connected 00:33:37.378 [2024-04-18 11:24:45.445868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (9): Bad file descriptor 00:33:37.378 [2024-04-18 11:24:45.446863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:37.378 [2024-04-18 11:24:45.446897] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:37.378 [2024-04-18 11:24:45.446913] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
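The refcount checks scattered through this part of the trace all use the same keyring/common.sh helper: list every key over the bperf RPC socket and select the entry of interest with jq. A standalone sketch of that check is shown below (a consolidation of the traced pipeline, not the helper verbatim). As the trace shows, attaching nvme0 with --psk key0 raises key0's refcnt from 1 to 2, and bdev_nvme_detach_controller drops it back to 1 while key1 stays at 1.

    # Read the current refcount of key0 from the running bdevperf instance.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt'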
00:33:37.378 2024/04/18 11:24:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:33:37.378 request: 00:33:37.379 { 00:33:37.379 "method": "bdev_nvme_attach_controller", 00:33:37.379 "params": { 00:33:37.379 "name": "nvme0", 00:33:37.379 "trtype": "tcp", 00:33:37.379 "traddr": "127.0.0.1", 00:33:37.379 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:37.379 "adrfam": "ipv4", 00:33:37.379 "trsvcid": "4420", 00:33:37.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:37.379 "psk": "key1" 00:33:37.379 } 00:33:37.379 } 00:33:37.379 Got JSON-RPC error response 00:33:37.379 GoRPCClient: error on JSON-RPC call 00:33:37.379 11:24:45 -- common/autotest_common.sh@641 -- # es=1 00:33:37.379 11:24:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:37.379 11:24:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:37.379 11:24:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:37.379 11:24:45 -- keyring/file.sh@71 -- # get_refcnt key0 00:33:37.379 11:24:45 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:37.379 11:24:45 -- keyring/common.sh@12 -- # get_key key0 00:33:37.379 11:24:45 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:37.379 11:24:45 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.379 11:24:45 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:37.637 11:24:45 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:37.637 11:24:45 -- keyring/file.sh@72 -- # get_refcnt key1 00:33:37.637 11:24:45 -- keyring/common.sh@12 -- # get_key key1 00:33:37.637 11:24:45 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:37.637 11:24:45 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:37.637 11:24:45 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.637 11:24:45 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:37.895 11:24:45 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:37.895 11:24:45 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:37.895 11:24:45 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:38.152 11:24:46 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:38.152 11:24:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:38.410 11:24:46 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:38.410 11:24:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.410 11:24:46 -- keyring/file.sh@77 -- # jq length 00:33:38.724 11:24:46 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:38.724 11:24:46 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Wc4il6bJHf 00:33:38.724 11:24:46 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Wc4il6bJHf 00:33:38.724 11:24:46 -- common/autotest_common.sh@638 -- # local es=0 00:33:38.724 11:24:46 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Wc4il6bJHf 00:33:38.724 11:24:46 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:38.724 
11:24:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:38.724 11:24:46 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:38.724 11:24:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:38.724 11:24:46 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Wc4il6bJHf 00:33:38.724 11:24:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Wc4il6bJHf 00:33:38.724 [2024-04-18 11:24:46.915950] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Wc4il6bJHf': 0100660 00:33:38.724 [2024-04-18 11:24:46.916012] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:38.724 2024/04/18 11:24:46 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.Wc4il6bJHf], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:33:38.724 request: 00:33:38.724 { 00:33:38.724 "method": "keyring_file_add_key", 00:33:38.724 "params": { 00:33:38.724 "name": "key0", 00:33:38.724 "path": "/tmp/tmp.Wc4il6bJHf" 00:33:38.724 } 00:33:38.724 } 00:33:38.724 Got JSON-RPC error response 00:33:38.724 GoRPCClient: error on JSON-RPC call 00:33:38.983 11:24:46 -- common/autotest_common.sh@641 -- # es=1 00:33:38.983 11:24:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:38.983 11:24:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:38.983 11:24:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:38.983 11:24:46 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Wc4il6bJHf 00:33:38.983 11:24:46 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Wc4il6bJHf 00:33:38.983 11:24:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Wc4il6bJHf 00:33:38.983 11:24:47 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Wc4il6bJHf 00:33:38.983 11:24:47 -- keyring/file.sh@88 -- # get_refcnt key0 00:33:38.983 11:24:47 -- keyring/common.sh@12 -- # get_key key0 00:33:38.983 11:24:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:38.983 11:24:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:38.983 11:24:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.983 11:24:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:39.550 11:24:47 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:39.550 11:24:47 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:39.550 11:24:47 -- common/autotest_common.sh@638 -- # local es=0 00:33:39.550 11:24:47 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:39.550 11:24:47 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:39.550 11:24:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:39.550 11:24:47 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:39.550 11:24:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:39.550 11:24:47 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:39.550 11:24:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:39.550 [2024-04-18 11:24:47.712215] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Wc4il6bJHf': No such file or directory 00:33:39.550 [2024-04-18 11:24:47.712276] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:39.550 [2024-04-18 11:24:47.712311] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:39.550 [2024-04-18 11:24:47.712325] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:39.550 [2024-04-18 11:24:47.712341] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:39.550 2024/04/18 11:24:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:33:39.550 request: 00:33:39.550 { 00:33:39.550 "method": "bdev_nvme_attach_controller", 00:33:39.550 "params": { 00:33:39.550 "name": "nvme0", 00:33:39.550 "trtype": "tcp", 00:33:39.550 "traddr": "127.0.0.1", 00:33:39.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.550 "adrfam": "ipv4", 00:33:39.550 "trsvcid": "4420", 00:33:39.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.550 "psk": "key0" 00:33:39.550 } 00:33:39.550 } 00:33:39.550 Got JSON-RPC error response 00:33:39.550 GoRPCClient: error on JSON-RPC call 00:33:39.550 11:24:47 -- common/autotest_common.sh@641 -- # es=1 00:33:39.550 11:24:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:39.550 11:24:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:39.550 11:24:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:39.550 11:24:47 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:39.550 11:24:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:40.116 11:24:48 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:40.116 11:24:48 -- keyring/common.sh@15 -- # local name key digest path 00:33:40.116 11:24:48 -- keyring/common.sh@17 -- # name=key0 00:33:40.116 11:24:48 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:40.116 11:24:48 -- keyring/common.sh@17 -- # digest=0 00:33:40.116 11:24:48 -- keyring/common.sh@18 -- # mktemp 00:33:40.116 11:24:48 -- keyring/common.sh@18 -- # path=/tmp/tmp.HhKy3cVj6N 00:33:40.116 11:24:48 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:40.116 11:24:48 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:40.116 11:24:48 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:40.116 11:24:48 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:40.116 11:24:48 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:40.116 11:24:48 -- nvmf/common.sh@693 -- # digest=0 00:33:40.116 11:24:48 -- nvmf/common.sh@694 -- # python - 00:33:40.116 
11:24:48 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HhKy3cVj6N 00:33:40.116 11:24:48 -- keyring/common.sh@23 -- # echo /tmp/tmp.HhKy3cVj6N 00:33:40.116 11:24:48 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.HhKy3cVj6N 00:33:40.116 11:24:48 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HhKy3cVj6N 00:33:40.116 11:24:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HhKy3cVj6N 00:33:40.374 11:24:48 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:40.374 11:24:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:40.631 nvme0n1 00:33:40.632 11:24:48 -- keyring/file.sh@99 -- # get_refcnt key0 00:33:40.632 11:24:48 -- keyring/common.sh@12 -- # get_key key0 00:33:40.632 11:24:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.632 11:24:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.632 11:24:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:40.632 11:24:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.889 11:24:48 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:40.889 11:24:48 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:40.889 11:24:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:41.148 11:24:49 -- keyring/file.sh@101 -- # get_key key0 00:33:41.148 11:24:49 -- keyring/file.sh@101 -- # jq -r .removed 00:33:41.148 11:24:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:41.148 11:24:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.148 11:24:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:41.406 11:24:49 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:41.406 11:24:49 -- keyring/file.sh@102 -- # get_refcnt key0 00:33:41.406 11:24:49 -- keyring/common.sh@12 -- # get_key key0 00:33:41.406 11:24:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:41.406 11:24:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:41.406 11:24:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:41.406 11:24:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.664 11:24:49 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:41.664 11:24:49 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:41.664 11:24:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:41.922 11:24:50 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:41.922 11:24:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.922 11:24:50 -- keyring/file.sh@104 -- # jq length 00:33:42.180 11:24:50 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:42.180 11:24:50 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HhKy3cVj6N 00:33:42.180 11:24:50 -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HhKy3cVj6N 00:33:42.438 11:24:50 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5r58yW7ayp 00:33:42.438 11:24:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5r58yW7ayp 00:33:42.697 11:24:50 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:42.697 11:24:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:42.955 nvme0n1 00:33:42.955 11:24:51 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:42.955 11:24:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:43.213 11:24:51 -- keyring/file.sh@112 -- # config='{ 00:33:43.213 "subsystems": [ 00:33:43.213 { 00:33:43.213 "subsystem": "keyring", 00:33:43.213 "config": [ 00:33:43.213 { 00:33:43.213 "method": "keyring_file_add_key", 00:33:43.213 "params": { 00:33:43.213 "name": "key0", 00:33:43.213 "path": "/tmp/tmp.HhKy3cVj6N" 00:33:43.213 } 00:33:43.213 }, 00:33:43.213 { 00:33:43.213 "method": "keyring_file_add_key", 00:33:43.213 "params": { 00:33:43.213 "name": "key1", 00:33:43.213 "path": "/tmp/tmp.5r58yW7ayp" 00:33:43.213 } 00:33:43.213 } 00:33:43.213 ] 00:33:43.213 }, 00:33:43.213 { 00:33:43.213 "subsystem": "iobuf", 00:33:43.213 "config": [ 00:33:43.213 { 00:33:43.213 "method": "iobuf_set_options", 00:33:43.213 "params": { 00:33:43.213 "large_bufsize": 135168, 00:33:43.213 "large_pool_count": 1024, 00:33:43.213 "small_bufsize": 8192, 00:33:43.213 "small_pool_count": 8192 00:33:43.213 } 00:33:43.213 } 00:33:43.213 ] 00:33:43.213 }, 00:33:43.213 { 00:33:43.213 "subsystem": "sock", 00:33:43.213 "config": [ 00:33:43.213 { 00:33:43.213 "method": "sock_impl_set_options", 00:33:43.213 "params": { 00:33:43.213 "enable_ktls": false, 00:33:43.213 "enable_placement_id": 0, 00:33:43.213 "enable_quickack": false, 00:33:43.213 "enable_recv_pipe": true, 00:33:43.213 "enable_zerocopy_send_client": false, 00:33:43.213 "enable_zerocopy_send_server": true, 00:33:43.213 "impl_name": "posix", 00:33:43.213 "recv_buf_size": 2097152, 00:33:43.213 "send_buf_size": 2097152, 00:33:43.213 "tls_version": 0, 00:33:43.213 "zerocopy_threshold": 0 00:33:43.213 } 00:33:43.213 }, 00:33:43.213 { 00:33:43.213 "method": "sock_impl_set_options", 00:33:43.213 "params": { 00:33:43.213 "enable_ktls": false, 00:33:43.213 "enable_placement_id": 0, 00:33:43.213 "enable_quickack": false, 00:33:43.213 "enable_recv_pipe": true, 00:33:43.213 "enable_zerocopy_send_client": false, 00:33:43.213 "enable_zerocopy_send_server": true, 00:33:43.213 "impl_name": "ssl", 00:33:43.213 "recv_buf_size": 4096, 00:33:43.213 "send_buf_size": 4096, 00:33:43.213 "tls_version": 0, 00:33:43.213 "zerocopy_threshold": 0 00:33:43.213 } 00:33:43.213 } 00:33:43.213 ] 00:33:43.213 }, 00:33:43.213 { 00:33:43.213 "subsystem": "vmd", 00:33:43.213 "config": [] 00:33:43.213 }, 00:33:43.213 { 00:33:43.213 "subsystem": "accel", 00:33:43.213 "config": [ 00:33:43.213 { 00:33:43.213 "method": "accel_set_options", 00:33:43.213 "params": { 00:33:43.213 "buf_count": 2048, 00:33:43.213 "large_cache_size": 16, 00:33:43.213 
"sequence_count": 2048, 00:33:43.213 "small_cache_size": 128, 00:33:43.213 "task_count": 2048 00:33:43.213 } 00:33:43.213 } 00:33:43.213 ] 00:33:43.213 }, 00:33:43.213 { 00:33:43.213 "subsystem": "bdev", 00:33:43.213 "config": [ 00:33:43.213 { 00:33:43.213 "method": "bdev_set_options", 00:33:43.213 "params": { 00:33:43.213 "bdev_auto_examine": true, 00:33:43.213 "bdev_io_cache_size": 256, 00:33:43.213 "bdev_io_pool_size": 65535, 00:33:43.213 "iobuf_large_cache_size": 16, 00:33:43.213 "iobuf_small_cache_size": 128 00:33:43.213 } 00:33:43.213 }, 00:33:43.213 { 00:33:43.213 "method": "bdev_raid_set_options", 00:33:43.213 "params": { 00:33:43.213 "process_window_size_kb": 1024 00:33:43.213 } 00:33:43.213 }, 00:33:43.213 { 00:33:43.213 "method": "bdev_iscsi_set_options", 00:33:43.213 "params": { 00:33:43.213 "timeout_sec": 30 00:33:43.213 } 00:33:43.213 }, 00:33:43.213 { 00:33:43.213 "method": "bdev_nvme_set_options", 00:33:43.213 "params": { 00:33:43.213 "action_on_timeout": "none", 00:33:43.213 "allow_accel_sequence": false, 00:33:43.213 "arbitration_burst": 0, 00:33:43.213 "bdev_retry_count": 3, 00:33:43.213 "ctrlr_loss_timeout_sec": 0, 00:33:43.213 "delay_cmd_submit": true, 00:33:43.214 "dhchap_dhgroups": [ 00:33:43.214 "null", 00:33:43.214 "ffdhe2048", 00:33:43.214 "ffdhe3072", 00:33:43.214 "ffdhe4096", 00:33:43.214 "ffdhe6144", 00:33:43.214 "ffdhe8192" 00:33:43.214 ], 00:33:43.214 "dhchap_digests": [ 00:33:43.214 "sha256", 00:33:43.214 "sha384", 00:33:43.214 "sha512" 00:33:43.214 ], 00:33:43.214 "disable_auto_failback": false, 00:33:43.214 "fast_io_fail_timeout_sec": 0, 00:33:43.214 "generate_uuids": false, 00:33:43.214 "high_priority_weight": 0, 00:33:43.214 "io_path_stat": false, 00:33:43.214 "io_queue_requests": 512, 00:33:43.214 "keep_alive_timeout_ms": 10000, 00:33:43.214 "low_priority_weight": 0, 00:33:43.214 "medium_priority_weight": 0, 00:33:43.214 "nvme_adminq_poll_period_us": 10000, 00:33:43.214 "nvme_error_stat": false, 00:33:43.214 "nvme_ioq_poll_period_us": 0, 00:33:43.214 "rdma_cm_event_timeout_ms": 0, 00:33:43.214 "rdma_max_cq_size": 0, 00:33:43.214 "rdma_srq_size": 0, 00:33:43.214 "reconnect_delay_sec": 0, 00:33:43.214 "timeout_admin_us": 0, 00:33:43.214 "timeout_us": 0, 00:33:43.214 "transport_ack_timeout": 0, 00:33:43.214 "transport_retry_count": 4, 00:33:43.214 "transport_tos": 0 00:33:43.214 } 00:33:43.214 }, 00:33:43.214 { 00:33:43.214 "method": "bdev_nvme_attach_controller", 00:33:43.214 "params": { 00:33:43.214 "adrfam": "IPv4", 00:33:43.214 "ctrlr_loss_timeout_sec": 0, 00:33:43.214 "ddgst": false, 00:33:43.214 "fast_io_fail_timeout_sec": 0, 00:33:43.214 "hdgst": false, 00:33:43.214 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:43.214 "name": "nvme0", 00:33:43.214 "prchk_guard": false, 00:33:43.214 "prchk_reftag": false, 00:33:43.214 "psk": "key0", 00:33:43.214 "reconnect_delay_sec": 0, 00:33:43.214 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:43.214 "traddr": "127.0.0.1", 00:33:43.214 "trsvcid": "4420", 00:33:43.214 "trtype": "TCP" 00:33:43.214 } 00:33:43.214 }, 00:33:43.214 { 00:33:43.214 "method": "bdev_nvme_set_hotplug", 00:33:43.214 "params": { 00:33:43.214 "enable": false, 00:33:43.214 "period_us": 100000 00:33:43.214 } 00:33:43.214 }, 00:33:43.214 { 00:33:43.214 "method": "bdev_wait_for_examine" 00:33:43.214 } 00:33:43.214 ] 00:33:43.214 }, 00:33:43.214 { 00:33:43.214 "subsystem": "nbd", 00:33:43.214 "config": [] 00:33:43.214 } 00:33:43.214 ] 00:33:43.214 }' 00:33:43.214 11:24:51 -- keyring/file.sh@114 -- # killprocess 95246 00:33:43.214 11:24:51 -- 
common/autotest_common.sh@936 -- # '[' -z 95246 ']' 00:33:43.214 11:24:51 -- common/autotest_common.sh@940 -- # kill -0 95246 00:33:43.214 11:24:51 -- common/autotest_common.sh@941 -- # uname 00:33:43.214 11:24:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:43.214 11:24:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95246 00:33:43.214 11:24:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:43.214 11:24:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:43.214 killing process with pid 95246 00:33:43.214 11:24:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95246' 00:33:43.214 Received shutdown signal, test time was about 1.000000 seconds 00:33:43.214 00:33:43.214 Latency(us) 00:33:43.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.214 =================================================================================================================== 00:33:43.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:43.214 11:24:51 -- common/autotest_common.sh@955 -- # kill 95246 00:33:43.214 11:24:51 -- common/autotest_common.sh@960 -- # wait 95246 00:33:44.147 11:24:52 -- keyring/file.sh@117 -- # bperfpid=95729 00:33:44.147 11:24:52 -- keyring/file.sh@119 -- # waitforlisten 95729 /var/tmp/bperf.sock 00:33:44.147 11:24:52 -- common/autotest_common.sh@817 -- # '[' -z 95729 ']' 00:33:44.147 11:24:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:44.147 11:24:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:44.147 11:24:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:44.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
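The JSON blob dumped a few entries above is the bperf instance's own configuration, obtained through the save_config RPC just before the process is killed. Captured by hand it would look roughly like the sketch below (saved_config.json is only an illustrative name; the test itself keeps the output in a shell variable).

    # Snapshot the running bdevperf configuration, including both PSK key paths.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config \
        > saved_config.json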
00:33:44.147 11:24:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:44.147 11:24:52 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:44.147 11:24:52 -- common/autotest_common.sh@10 -- # set +x 00:33:44.147 11:24:52 -- keyring/file.sh@115 -- # echo '{ 00:33:44.147 "subsystems": [ 00:33:44.147 { 00:33:44.147 "subsystem": "keyring", 00:33:44.147 "config": [ 00:33:44.147 { 00:33:44.147 "method": "keyring_file_add_key", 00:33:44.147 "params": { 00:33:44.147 "name": "key0", 00:33:44.147 "path": "/tmp/tmp.HhKy3cVj6N" 00:33:44.147 } 00:33:44.147 }, 00:33:44.147 { 00:33:44.147 "method": "keyring_file_add_key", 00:33:44.147 "params": { 00:33:44.147 "name": "key1", 00:33:44.147 "path": "/tmp/tmp.5r58yW7ayp" 00:33:44.147 } 00:33:44.147 } 00:33:44.147 ] 00:33:44.147 }, 00:33:44.147 { 00:33:44.147 "subsystem": "iobuf", 00:33:44.147 "config": [ 00:33:44.147 { 00:33:44.147 "method": "iobuf_set_options", 00:33:44.147 "params": { 00:33:44.147 "large_bufsize": 135168, 00:33:44.147 "large_pool_count": 1024, 00:33:44.147 "small_bufsize": 8192, 00:33:44.147 "small_pool_count": 8192 00:33:44.147 } 00:33:44.147 } 00:33:44.147 ] 00:33:44.147 }, 00:33:44.147 { 00:33:44.147 "subsystem": "sock", 00:33:44.147 "config": [ 00:33:44.147 { 00:33:44.147 "method": "sock_impl_set_options", 00:33:44.147 "params": { 00:33:44.147 "enable_ktls": false, 00:33:44.147 "enable_placement_id": 0, 00:33:44.147 "enable_quickack": false, 00:33:44.147 "enable_recv_pipe": true, 00:33:44.147 "enable_zerocopy_send_client": false, 00:33:44.147 "enable_zerocopy_send_server": true, 00:33:44.147 "impl_name": "posix", 00:33:44.147 "recv_buf_size": 2097152, 00:33:44.147 "send_buf_size": 2097152, 00:33:44.147 "tls_version": 0, 00:33:44.147 "zerocopy_threshold": 0 00:33:44.147 } 00:33:44.147 }, 00:33:44.147 { 00:33:44.147 "method": "sock_impl_set_options", 00:33:44.147 "params": { 00:33:44.147 "enable_ktls": false, 00:33:44.147 "enable_placement_id": 0, 00:33:44.147 "enable_quickack": false, 00:33:44.147 "enable_recv_pipe": true, 00:33:44.147 "enable_zerocopy_send_client": false, 00:33:44.147 "enable_zerocopy_send_server": true, 00:33:44.147 "impl_name": "ssl", 00:33:44.147 "recv_buf_size": 4096, 00:33:44.147 "send_buf_size": 4096, 00:33:44.147 "tls_version": 0, 00:33:44.147 "zerocopy_threshold": 0 00:33:44.147 } 00:33:44.147 } 00:33:44.147 ] 00:33:44.147 }, 00:33:44.147 { 00:33:44.147 "subsystem": "vmd", 00:33:44.147 "config": [] 00:33:44.147 }, 00:33:44.147 { 00:33:44.147 "subsystem": "accel", 00:33:44.147 "config": [ 00:33:44.147 { 00:33:44.147 "method": "accel_set_options", 00:33:44.147 "params": { 00:33:44.147 "buf_count": 2048, 00:33:44.147 "large_cache_size": 16, 00:33:44.147 "sequence_count": 2048, 00:33:44.147 "small_cache_size": 128, 00:33:44.147 "task_count": 2048 00:33:44.147 } 00:33:44.147 } 00:33:44.147 ] 00:33:44.147 }, 00:33:44.147 { 00:33:44.147 "subsystem": "bdev", 00:33:44.147 "config": [ 00:33:44.147 { 00:33:44.147 "method": "bdev_set_options", 00:33:44.147 "params": { 00:33:44.147 "bdev_auto_examine": true, 00:33:44.147 "bdev_io_cache_size": 256, 00:33:44.147 "bdev_io_pool_size": 65535, 00:33:44.147 "iobuf_large_cache_size": 16, 00:33:44.147 "iobuf_small_cache_size": 128 00:33:44.147 } 00:33:44.147 }, 00:33:44.147 { 00:33:44.147 "method": "bdev_raid_set_options", 00:33:44.147 "params": { 00:33:44.147 "process_window_size_kb": 1024 00:33:44.147 } 00:33:44.147 }, 00:33:44.147 { 00:33:44.147 "method": 
"bdev_iscsi_set_options", 00:33:44.147 "params": { 00:33:44.147 "timeout_sec": 30 00:33:44.147 } 00:33:44.147 }, 00:33:44.147 { 00:33:44.147 "method": "bdev_nvme_set_options", 00:33:44.147 "params": { 00:33:44.147 "action_on_timeout": "none", 00:33:44.147 "allow_accel_sequence": false, 00:33:44.147 "arbitration_burst": 0, 00:33:44.147 "bdev_retry_count": 3, 00:33:44.147 "ctrlr_loss_timeout_sec": 0, 00:33:44.147 "delay_cmd_submit": true, 00:33:44.147 "dhchap_dhgroups": [ 00:33:44.147 "null", 00:33:44.147 "ffdhe2048", 00:33:44.147 "ffdhe3072", 00:33:44.147 "ffdhe4096", 00:33:44.147 "ffdhe6144", 00:33:44.147 "ffdhe8192" 00:33:44.147 ], 00:33:44.147 "dhchap_digests": [ 00:33:44.147 "sha256", 00:33:44.147 "sha384", 00:33:44.147 "sha512" 00:33:44.147 ], 00:33:44.147 "disable_auto_failback": false, 00:33:44.147 "fast_io_fail_timeout_sec": 0, 00:33:44.147 "generate_uuids": false, 00:33:44.147 "high_priority_weight": 0, 00:33:44.147 "io_path_stat": false, 00:33:44.147 "io_queue_requests": 512, 00:33:44.147 "keep_alive_timeout_ms": 10000, 00:33:44.147 "low_priority_weight": 0, 00:33:44.147 "medium_priority_weight": 0, 00:33:44.148 "nvme_adminq_poll_period_us": 10000, 00:33:44.148 "nvme_error_stat": false, 00:33:44.148 "nvme_ioq_poll_period_us": 0, 00:33:44.148 "rdma_cm_event_timeout_ms": 0, 00:33:44.148 "rdma_max_cq_size": 0, 00:33:44.148 "rdma_srq_size": 0, 00:33:44.148 "reconnect_delay_sec": 0, 00:33:44.148 "timeout_admin_us": 0, 00:33:44.148 "timeout_us": 0, 00:33:44.148 "transport_ack_timeout": 0, 00:33:44.148 "transport_retry_count": 4, 00:33:44.148 "transport_tos": 0 00:33:44.148 } 00:33:44.148 }, 00:33:44.148 { 00:33:44.148 "method": "bdev_nvme_attach_controller", 00:33:44.148 "params": { 00:33:44.148 "adrfam": "IPv4", 00:33:44.148 "ctrlr_loss_timeout_sec": 0, 00:33:44.148 "ddgst": false, 00:33:44.148 "fast_io_fail_timeout_sec": 0, 00:33:44.148 "hdgst": false, 00:33:44.148 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:44.148 "name": "nvme0", 00:33:44.148 "prchk_guard": false, 00:33:44.148 "prchk_reftag": false, 00:33:44.148 "psk": "key0", 00:33:44.148 "reconnect_delay_sec": 0, 00:33:44.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:44.148 "traddr": "127.0.0.1", 00:33:44.148 "trsvcid": "4420", 00:33:44.148 "trtype": "TCP" 00:33:44.148 } 00:33:44.148 }, 00:33:44.148 { 00:33:44.148 "method": "bdev_nvme_set_hotplug", 00:33:44.148 "params": { 00:33:44.148 "enable": false, 00:33:44.148 "period_us": 100000 00:33:44.148 } 00:33:44.148 }, 00:33:44.148 { 00:33:44.148 "method": "bdev_wait_for_examine" 00:33:44.148 } 00:33:44.148 ] 00:33:44.148 }, 00:33:44.148 { 00:33:44.148 "subsystem": "nbd", 00:33:44.148 "config": [] 00:33:44.148 } 00:33:44.148 ] 00:33:44.148 }' 00:33:44.404 [2024-04-18 11:24:52.438086] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:33:44.404 [2024-04-18 11:24:52.438319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95729 ] 00:33:44.404 [2024-04-18 11:24:52.609887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.661 [2024-04-18 11:24:52.837275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.225 [2024-04-18 11:24:53.230595] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:45.225 11:24:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:45.225 11:24:53 -- common/autotest_common.sh@850 -- # return 0 00:33:45.225 11:24:53 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:45.225 11:24:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.225 11:24:53 -- keyring/file.sh@120 -- # jq length 00:33:45.483 11:24:53 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:45.483 11:24:53 -- keyring/file.sh@121 -- # get_refcnt key0 00:33:45.483 11:24:53 -- keyring/common.sh@12 -- # get_key key0 00:33:45.483 11:24:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:45.483 11:24:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:45.483 11:24:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:45.483 11:24:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.741 11:24:53 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:45.741 11:24:53 -- keyring/file.sh@122 -- # get_refcnt key1 00:33:45.741 11:24:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:45.741 11:24:53 -- keyring/common.sh@12 -- # get_key key1 00:33:45.741 11:24:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:45.741 11:24:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.741 11:24:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:45.998 11:24:54 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:45.998 11:24:54 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:45.998 11:24:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:45.998 11:24:54 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:46.256 11:24:54 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:46.256 11:24:54 -- keyring/file.sh@1 -- # cleanup 00:33:46.256 11:24:54 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.HhKy3cVj6N /tmp/tmp.5r58yW7ayp 00:33:46.256 11:24:54 -- keyring/file.sh@20 -- # killprocess 95729 00:33:46.256 11:24:54 -- common/autotest_common.sh@936 -- # '[' -z 95729 ']' 00:33:46.256 11:24:54 -- common/autotest_common.sh@940 -- # kill -0 95729 00:33:46.256 11:24:54 -- common/autotest_common.sh@941 -- # uname 00:33:46.256 11:24:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:46.256 11:24:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95729 00:33:46.256 killing process with pid 95729 00:33:46.256 Received shutdown signal, test time was about 1.000000 seconds 00:33:46.256 00:33:46.256 Latency(us) 00:33:46.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.256 
=================================================================================================================== 00:33:46.256 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:46.256 11:24:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:46.256 11:24:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:46.256 11:24:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95729' 00:33:46.256 11:24:54 -- common/autotest_common.sh@955 -- # kill 95729 00:33:46.256 11:24:54 -- common/autotest_common.sh@960 -- # wait 95729 00:33:47.628 11:24:55 -- keyring/file.sh@21 -- # killprocess 95211 00:33:47.628 11:24:55 -- common/autotest_common.sh@936 -- # '[' -z 95211 ']' 00:33:47.628 11:24:55 -- common/autotest_common.sh@940 -- # kill -0 95211 00:33:47.628 11:24:55 -- common/autotest_common.sh@941 -- # uname 00:33:47.628 11:24:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:47.628 11:24:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95211 00:33:47.628 killing process with pid 95211 00:33:47.628 11:24:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:47.628 11:24:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:47.628 11:24:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95211' 00:33:47.628 11:24:55 -- common/autotest_common.sh@955 -- # kill 95211 00:33:47.628 [2024-04-18 11:24:55.543887] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:47.628 11:24:55 -- common/autotest_common.sh@960 -- # wait 95211 00:33:49.527 00:33:49.527 real 0m19.825s 00:33:49.527 user 0m44.973s 00:33:49.527 sys 0m3.655s 00:33:49.527 11:24:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:49.527 11:24:57 -- common/autotest_common.sh@10 -- # set +x 00:33:49.527 ************************************ 00:33:49.527 END TEST keyring_file 00:33:49.527 ************************************ 00:33:49.527 11:24:57 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:33:49.527 11:24:57 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:33:49.527 11:24:57 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:33:49.527 11:24:57 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:33:49.783 11:24:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:49.783 11:24:57 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:33:49.783 11:24:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:49.783 11:24:57 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:33:49.783 11:24:57 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:33:49.783 11:24:57 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:33:49.783 11:24:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:49.783 11:24:57 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:33:49.783 11:24:57 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:33:49.783 11:24:57 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:33:49.783 11:24:57 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:33:49.783 11:24:57 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:33:49.783 11:24:57 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:33:49.783 11:24:57 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:33:49.783 11:24:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:49.783 11:24:57 -- common/autotest_common.sh@10 -- # set +x 00:33:49.783 11:24:57 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:33:49.783 11:24:57 -- common/autotest_common.sh@1378 -- # local 
autotest_es=0 00:33:49.783 11:24:57 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:33:49.783 11:24:57 -- common/autotest_common.sh@10 -- # set +x 00:33:51.694 INFO: APP EXITING 00:33:51.694 INFO: killing all VMs 00:33:51.694 INFO: killing vhost app 00:33:51.694 INFO: EXIT DONE 00:33:51.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:51.964 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:51.964 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:52.530 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:52.788 Cleaning 00:33:52.788 Removing: /var/run/dpdk/spdk0/config 00:33:52.788 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:52.788 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:52.788 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:52.788 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:52.788 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:52.788 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:52.788 Removing: /var/run/dpdk/spdk1/config 00:33:52.788 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:52.788 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:52.788 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:52.788 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:52.788 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:52.788 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:52.788 Removing: /var/run/dpdk/spdk2/config 00:33:52.788 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:52.788 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:52.788 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:52.788 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:52.788 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:52.788 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:52.788 Removing: /var/run/dpdk/spdk3/config 00:33:52.788 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:52.788 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:52.788 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:52.788 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:52.788 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:52.788 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:52.788 Removing: /var/run/dpdk/spdk4/config 00:33:52.788 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:52.788 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:52.788 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:52.788 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:52.788 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:52.788 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:52.788 Removing: /dev/shm/nvmf_trace.0 00:33:52.788 Removing: /dev/shm/spdk_tgt_trace.pid60249 00:33:52.788 Removing: /var/run/dpdk/spdk0 00:33:52.788 Removing: /var/run/dpdk/spdk1 00:33:52.788 Removing: /var/run/dpdk/spdk2 00:33:52.788 Removing: /var/run/dpdk/spdk3 00:33:52.788 Removing: /var/run/dpdk/spdk4 00:33:52.788 Removing: /var/run/dpdk/spdk_pid60004 00:33:52.788 Removing: /var/run/dpdk/spdk_pid60249 00:33:52.789 Removing: /var/run/dpdk/spdk_pid60564 00:33:52.789 Removing: /var/run/dpdk/spdk_pid60689 00:33:52.789 Removing: /var/run/dpdk/spdk_pid60747 00:33:52.789 Removing: /var/run/dpdk/spdk_pid60889 00:33:52.789 Removing: 
/var/run/dpdk/spdk_pid60924 00:33:52.789 Removing: /var/run/dpdk/spdk_pid61083 00:33:52.789 Removing: /var/run/dpdk/spdk_pid61370 00:33:52.789 Removing: /var/run/dpdk/spdk_pid61563 00:33:52.789 Removing: /var/run/dpdk/spdk_pid61682 00:33:52.789 Removing: /var/run/dpdk/spdk_pid61804 00:33:52.789 Removing: /var/run/dpdk/spdk_pid61931 00:33:52.789 Removing: /var/run/dpdk/spdk_pid61981 00:33:52.789 Removing: /var/run/dpdk/spdk_pid62023 00:33:52.789 Removing: /var/run/dpdk/spdk_pid62096 00:33:52.789 Removing: /var/run/dpdk/spdk_pid62233 00:33:52.789 Removing: /var/run/dpdk/spdk_pid62889 00:33:52.789 Removing: /var/run/dpdk/spdk_pid62982 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63079 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63117 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63280 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63314 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63477 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63505 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63590 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63620 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63695 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63736 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63949 00:33:52.789 Removing: /var/run/dpdk/spdk_pid63995 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64081 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64193 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64235 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64326 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64383 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64439 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64490 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64540 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64595 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64647 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64704 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64749 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64805 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64862 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64913 00:33:53.046 Removing: /var/run/dpdk/spdk_pid64964 00:33:53.046 Removing: /var/run/dpdk/spdk_pid65015 00:33:53.046 Removing: /var/run/dpdk/spdk_pid65071 00:33:53.046 Removing: /var/run/dpdk/spdk_pid65121 00:33:53.046 Removing: /var/run/dpdk/spdk_pid65172 00:33:53.046 Removing: /var/run/dpdk/spdk_pid65231 00:33:53.046 Removing: /var/run/dpdk/spdk_pid65286 00:33:53.046 Removing: /var/run/dpdk/spdk_pid65336 00:33:53.047 Removing: /var/run/dpdk/spdk_pid65393 00:33:53.047 Removing: /var/run/dpdk/spdk_pid65481 00:33:53.047 Removing: /var/run/dpdk/spdk_pid65631 00:33:53.047 Removing: /var/run/dpdk/spdk_pid66091 00:33:53.047 Removing: /var/run/dpdk/spdk_pid69715 00:33:53.047 Removing: /var/run/dpdk/spdk_pid70087 00:33:53.047 Removing: /var/run/dpdk/spdk_pid71282 00:33:53.047 Removing: /var/run/dpdk/spdk_pid71672 00:33:53.047 Removing: /var/run/dpdk/spdk_pid71974 00:33:53.047 Removing: /var/run/dpdk/spdk_pid72015 00:33:53.047 Removing: /var/run/dpdk/spdk_pid72951 00:33:53.047 Removing: /var/run/dpdk/spdk_pid72997 00:33:53.047 Removing: /var/run/dpdk/spdk_pid73413 00:33:53.047 Removing: /var/run/dpdk/spdk_pid73961 00:33:53.047 Removing: /var/run/dpdk/spdk_pid74413 00:33:53.047 Removing: /var/run/dpdk/spdk_pid75447 00:33:53.047 Removing: /var/run/dpdk/spdk_pid76473 00:33:53.047 Removing: /var/run/dpdk/spdk_pid76607 00:33:53.047 Removing: /var/run/dpdk/spdk_pid76687 00:33:53.047 Removing: /var/run/dpdk/spdk_pid78219 00:33:53.047 Removing: /var/run/dpdk/spdk_pid78515 00:33:53.047 Removing: /var/run/dpdk/spdk_pid79003 
00:33:53.047 Removing: /var/run/dpdk/spdk_pid79113 00:33:53.047 Removing: /var/run/dpdk/spdk_pid79277 00:33:53.047 Removing: /var/run/dpdk/spdk_pid79329 00:33:53.047 Removing: /var/run/dpdk/spdk_pid79387 00:33:53.047 Removing: /var/run/dpdk/spdk_pid79439 00:33:53.047 Removing: /var/run/dpdk/spdk_pid79627 00:33:53.047 Removing: /var/run/dpdk/spdk_pid79786 00:33:53.047 Removing: /var/run/dpdk/spdk_pid80092 00:33:53.047 Removing: /var/run/dpdk/spdk_pid80228 00:33:53.047 Removing: /var/run/dpdk/spdk_pid80502 00:33:53.047 Removing: /var/run/dpdk/spdk_pid80652 00:33:53.047 Removing: /var/run/dpdk/spdk_pid80815 00:33:53.047 Removing: /var/run/dpdk/spdk_pid81179 00:33:53.047 Removing: /var/run/dpdk/spdk_pid81635 00:33:53.047 Removing: /var/run/dpdk/spdk_pid81968 00:33:53.047 Removing: /var/run/dpdk/spdk_pid82505 00:33:53.047 Removing: /var/run/dpdk/spdk_pid82508 00:33:53.047 Removing: /var/run/dpdk/spdk_pid82870 00:33:53.047 Removing: /var/run/dpdk/spdk_pid82893 00:33:53.047 Removing: /var/run/dpdk/spdk_pid82909 00:33:53.047 Removing: /var/run/dpdk/spdk_pid82942 00:33:53.047 Removing: /var/run/dpdk/spdk_pid82949 00:33:53.047 Removing: /var/run/dpdk/spdk_pid83273 00:33:53.047 Removing: /var/run/dpdk/spdk_pid83315 00:33:53.047 Removing: /var/run/dpdk/spdk_pid83669 00:33:53.047 Removing: /var/run/dpdk/spdk_pid83934 00:33:53.047 Removing: /var/run/dpdk/spdk_pid84456 00:33:53.047 Removing: /var/run/dpdk/spdk_pid85008 00:33:53.047 Removing: /var/run/dpdk/spdk_pid85632 00:33:53.047 Removing: /var/run/dpdk/spdk_pid85636 00:33:53.047 Removing: /var/run/dpdk/spdk_pid87624 00:33:53.047 Removing: /var/run/dpdk/spdk_pid87727 00:33:53.047 Removing: /var/run/dpdk/spdk_pid87830 00:33:53.047 Removing: /var/run/dpdk/spdk_pid87931 00:33:53.047 Removing: /var/run/dpdk/spdk_pid88118 00:33:53.047 Removing: /var/run/dpdk/spdk_pid88215 00:33:53.047 Removing: /var/run/dpdk/spdk_pid88322 00:33:53.047 Removing: /var/run/dpdk/spdk_pid88420 00:33:53.047 Removing: /var/run/dpdk/spdk_pid88808 00:33:53.047 Removing: /var/run/dpdk/spdk_pid89525 00:33:53.047 Removing: /var/run/dpdk/spdk_pid90904 00:33:53.047 Removing: /var/run/dpdk/spdk_pid91116 00:33:53.047 Removing: /var/run/dpdk/spdk_pid91414 00:33:53.047 Removing: /var/run/dpdk/spdk_pid91739 00:33:53.047 Removing: /var/run/dpdk/spdk_pid92319 00:33:53.047 Removing: /var/run/dpdk/spdk_pid92325 00:33:53.305 Removing: /var/run/dpdk/spdk_pid92725 00:33:53.305 Removing: /var/run/dpdk/spdk_pid92896 00:33:53.305 Removing: /var/run/dpdk/spdk_pid93062 00:33:53.305 Removing: /var/run/dpdk/spdk_pid93169 00:33:53.305 Removing: /var/run/dpdk/spdk_pid93328 00:33:53.305 Removing: /var/run/dpdk/spdk_pid93454 00:33:53.305 Removing: /var/run/dpdk/spdk_pid94157 00:33:53.305 Removing: /var/run/dpdk/spdk_pid94197 00:33:53.305 Removing: /var/run/dpdk/spdk_pid94233 00:33:53.305 Removing: /var/run/dpdk/spdk_pid94706 00:33:53.305 Removing: /var/run/dpdk/spdk_pid94742 00:33:53.305 Removing: /var/run/dpdk/spdk_pid94779 00:33:53.305 Removing: /var/run/dpdk/spdk_pid95211 00:33:53.305 Removing: /var/run/dpdk/spdk_pid95246 00:33:53.305 Removing: /var/run/dpdk/spdk_pid95729 00:33:53.305 Clean 00:33:53.305 11:25:01 -- common/autotest_common.sh@1437 -- # return 0 00:33:53.305 11:25:01 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:33:53.305 11:25:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:53.305 11:25:01 -- common/autotest_common.sh@10 -- # set +x 00:33:53.305 11:25:01 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:33:53.305 11:25:01 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:33:53.305 11:25:01 -- common/autotest_common.sh@10 -- # set +x 00:33:53.305 11:25:01 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:53.305 11:25:01 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:53.305 11:25:01 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:53.305 11:25:01 -- spdk/autotest.sh@389 -- # hash lcov 00:33:53.305 11:25:01 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:53.305 11:25:01 -- spdk/autotest.sh@391 -- # hostname 00:33:53.564 11:25:01 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:53.564 geninfo: WARNING: invalid characters removed from testname! 00:34:20.137 11:25:25 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:21.511 11:25:29 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:24.065 11:25:31 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:26.603 11:25:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:29.188 11:25:37 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:32.472 11:25:40 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:35.002 11:25:42 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:35.002 11:25:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
00:34:35.002 11:25:42 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:35.002 11:25:42 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.002 11:25:42 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.002 11:25:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.002 11:25:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.003 11:25:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.003 11:25:42 -- paths/export.sh@5 -- $ export PATH 00:34:35.003 11:25:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.003 11:25:42 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:34:35.003 11:25:43 -- common/autobuild_common.sh@435 -- $ date +%s 00:34:35.003 11:25:43 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713439543.XXXXXX 00:34:35.003 11:25:43 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713439543.UNyAK8 00:34:35.003 11:25:43 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:34:35.003 11:25:43 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:34:35.003 11:25:43 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:34:35.003 11:25:43 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:34:35.003 11:25:43 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:34:35.003 11:25:43 -- common/autobuild_common.sh@451 -- $ get_config_params 00:34:35.003 11:25:43 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:34:35.003 11:25:43 -- common/autotest_common.sh@10 -- $ set +x 00:34:35.003 11:25:43 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-avahi --with-golang' 00:34:35.003 11:25:43 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:34:35.003 11:25:43 -- pm/common@17 -- $ local monitor 00:34:35.003 11:25:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:35.003 11:25:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=97420 00:34:35.003 11:25:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:35.003 11:25:43 -- pm/common@21 -- $ date +%s 00:34:35.003 11:25:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=97423 00:34:35.003 11:25:43 -- pm/common@26 -- $ sleep 1 00:34:35.003 11:25:43 -- pm/common@21 -- $ date +%s 00:34:35.003 11:25:43 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713439543 00:34:35.003 11:25:43 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713439543 00:34:35.003 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713439543_collect-vmstat.pm.log 00:34:35.003 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713439543_collect-cpu-load.pm.log 00:34:35.937 11:25:44 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:34:35.937 11:25:44 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:34:35.937 11:25:44 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:34:35.937 11:25:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:35.937 11:25:44 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:35.937 11:25:44 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:35.937 11:25:44 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:35.937 11:25:44 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:35.937 11:25:44 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:35.937 11:25:44 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:35.937 11:25:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:35.937 11:25:44 -- pm/common@30 -- $ signal_monitor_resources TERM 00:34:35.937 11:25:44 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:34:35.937 11:25:44 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:35.937 11:25:44 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:34:35.937 11:25:44 -- pm/common@45 -- $ pid=97429 00:34:35.937 11:25:44 -- pm/common@52 -- $ sudo kill -TERM 97429 00:34:35.937 11:25:44 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:35.937 11:25:44 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:34:35.937 11:25:44 -- pm/common@45 -- $ pid=97428 00:34:35.937 11:25:44 -- pm/common@52 -- $ sudo kill -TERM 97428 00:34:36.196 + [[ -n 5147 ]] 00:34:36.196 + sudo kill 5147 00:34:36.205 [Pipeline] } 00:34:36.222 [Pipeline] // timeout 00:34:36.227 [Pipeline] } 00:34:36.245 [Pipeline] // stage 00:34:36.250 [Pipeline] } 00:34:36.269 [Pipeline] // catchError 00:34:36.278 [Pipeline] stage 00:34:36.280 [Pipeline] { (Stop VM) 00:34:36.295 [Pipeline] sh 00:34:36.571 + vagrant halt 00:34:40.754 ==> default: Halting domain... 
00:34:47.324 [Pipeline] sh 00:34:47.602 + vagrant destroy -f 00:34:51.787 ==> default: Removing domain... 00:34:51.796 [Pipeline] sh 00:34:52.072 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:34:52.082 [Pipeline] } 00:34:52.098 [Pipeline] // stage 00:34:52.104 [Pipeline] } 00:34:52.119 [Pipeline] // dir 00:34:52.124 [Pipeline] } 00:34:52.142 [Pipeline] // wrap 00:34:52.149 [Pipeline] } 00:34:52.164 [Pipeline] // catchError 00:34:52.174 [Pipeline] stage 00:34:52.176 [Pipeline] { (Epilogue) 00:34:52.192 [Pipeline] sh 00:34:52.474 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:59.051 [Pipeline] catchError 00:34:59.053 [Pipeline] { 00:34:59.067 [Pipeline] sh 00:34:59.345 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:59.603 Artifacts sizes are good 00:34:59.611 [Pipeline] } 00:34:59.626 [Pipeline] // catchError 00:34:59.637 [Pipeline] archiveArtifacts 00:34:59.643 Archiving artifacts 00:34:59.832 [Pipeline] cleanWs 00:34:59.843 [WS-CLEANUP] Deleting project workspace... 00:34:59.843 [WS-CLEANUP] Deferred wipeout is used... 00:34:59.848 [WS-CLEANUP] done 00:34:59.850 [Pipeline] } 00:34:59.867 [Pipeline] // stage 00:34:59.871 [Pipeline] } 00:34:59.886 [Pipeline] // node 00:34:59.891 [Pipeline] End of Pipeline 00:34:59.934 Finished: SUCCESS